http://arxiv.org/abs/2307.10190v1
20230708141246
Summary of the 3rd BINA Workshop
[ "Eugene Semenko", "Manfred Cuntz" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.SR" ]
Eugene Semenko^1 and Manfred Cuntz^2 — [1] National Astronomical Research Institute of Thailand (Public Organization), 260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand; [2] Department of Physics, University of Texas at Arlington, Arlington, TX 76019, USA. Summary of the 3rd BINA Workshop ============================ BINA-3 was the third workshop of this series involving scientists from India and Belgium, aimed at fostering future joint research in view of cutting-edge observatories and advances in theory. BINA-3 was held at the Graphic Era Hill University, 22–24 March 2023, at Bhimtal (near Nainital), Uttarakhand, India. A major event was the inauguration of the International Liquid-Mirror Telescope (ILMT), the first liquid-mirror telescope devoted exclusively to astronomy. BINA-3 provided impressive highlights encompassing topics of both general astrophysics and solar physics. Research results and future projects were featured through invited and contributed talks and poster presentations. § INDO-BELGIAN COLLABORATION IN SPACE AND TIME Without comprehensive international collaborations, it is difficult to imagine sustainable scientific progress in the modern age. In astronomy and astrophysics, such collaborations have enabled the operation of observational facilities at the best sites on the ground and in space. In large international organisations like the European Southern Observatory, we can see how technology exchange and the mobility of human resources promote research at all levels, from universities to international institutions. Especially promising collaborations pertain to India, the world's most populous country according to the United Nations <cit.>, with exceptionally rapid economic growth. The Belgo-Indian Network for Astronomy and Astrophysics, or BINA, was initiated in 2014 to foster the existing contacts between Indian and Belgian researchers, mostly from the Aryabhatta Research Institute of Observational Sciences (ARIES) and the Royal Observatory of Belgium (ROB), and to expand this collaboration on a nationwide scale in both countries. The third BINA workshop, which we have the pleasure of summarizing, marks the end of this project. Two previous workshops were held in 2016 in Nainital (India) and in 2018 in Brussels (Belgium). We believe that our summary would not be complete without a brief comparison of the third workshop with the two preceding ones. This will help us to better understand BINA's importance and outcome. The first workshop (BINA-1) took place in Nainital on 15–18 November 2016. According to the available statistics <cit.>, 107 astronomers from eight countries participated in the meeting, giving 36 oral talks and presenting 42 posters. Eighty-eight people from twelve partner institutes represented the Indian astronomical community, whereas six Belgian institutions sent ten representatives. The meeting's agenda focused primarily on the instrumentation of the newly commissioned 3.6-m Devasthal Optical Telescope (DOT) and on the future of the 4-m International Liquid-Mirror Telescope (ILMT). The scientific talks covered a wide range of subjects, from solar-system studies to individual stars, stellar clusters, exoplanets and extragalactic astronomy. The second BINA workshop (BINA-2) was held two years later, in 2018, in Brussels; it aimed to further expand the existing collaborations. 
Despite the significantly smaller number of participants (69 registered researchers from seven countries), the conference's scientific programme was rich in oral talks, totalling 44. Furthermore, there were eight poster presentations <cit.>. The scientific programme of the second workshop largely mirrored the agenda of the first meeting, accentuating the scientific application of the Belgo-Indian telescopes. A highly notable aspect of the second workshop's scientific programme was the presence of review talks. In terms of participation and the number of oral talks, BINA-3, the final workshop, resembled the previous events, although, fortunately, with a significant increase in participation and contributions. Nearly one hundred and fifty scientists from eleven countries participated in BINA-3, with the lion's share from India and Belgium. A total of 37 talks (10 invited, 27 contributed) were given in the main programme, and 21 contributed talks were given in the solar physics sessions. There were 81 poster presentations, many of which were led by graduate and undergraduate students. There is significant progress hiding behind these numbers. Since 2016, the Belgo-Indian network has grown to involve new institutes from both partner countries. The members have published numerous scientific papers with results obtained with the Belgo-Indian telescopes. Many of these were based on PhD theses pursued within BINA. The contents of the proceedings over 2016–2023 also reveal that many young researchers changed their affiliation, moving to new places and thus expanding the network of research contacts. Progress in instrumentation and scientific collaboration within BINA and with external institutes worldwide gave new impetus to solar and general astrophysics studies. In general, we can regard the significantly increased number of telescopes and instruments as the major indicator of progress achieved within the BINA project. The list of available instruments strongly shaped BINA-3. In the following sections, we briefly summarize its scientific programme. § OBSERVATIONAL TECHNIQUES AND INSTRUMENTATION Telescopes and their instruments were in the spotlight of all BINA workshops. The ILMT became the central theme of the current meeting. From a number of oral talks and poster presentations, one could get a comprehensive view of the operating principles of such telescopes. It was particularly interesting to find out about the data reduction, calibration and access to the processed images obtained with the ILMT. Numerous results of the first observations with the ILMT, shown mostly in the poster presentations, have demonstrated a wide range of possible scientific applications of zenith telescopes with liquid mirrors. Given the short time that has passed since the beginning of operations, and the results already obtained, we can confirm that the ILMT has proven its scientific concept and significantly strengthened the observational facilities for current and future Indo-Belgian projects. The Indo-Belgian 3.6-m Devasthal Optical Telescope (DOT), in operation since 2016, remains the largest fully steerable optical telescope in Asia. Yet, right around the time of BINA-3, the fleet of Indian telescopes was strengthened by the commissioning of a 2.5-m telescope, built by Advanced Mechanical and Optical Systems (AMOS) in Belgium for the Physical Research Laboratory (PRL) in Ahmedabad and installed at Mt Abu, Rajasthan, India. 
The development of new instruments and the upgrade of existing facilities was the central theme of the instrumentation section of the current conference. Notably, by 2028, the TIFR-ARIES Multi-Object Optical to Near-infrared Spectrograph (TA-MOONS) will bring new capabilities useful for studies of stars in star formation regions, open clusters, and extended sources with the DOT. Also for this telescope, adding a polarimetric mode to the ARIES-Devasthal Faint Object Spectrograph & Camera (ADFOSC), the existing device for observations of faint objects, will enable both linear and circular polarimetry. This new mode is of critical importance to the study of processes in star-forming regions, interacting stellar systems, supernovae, active galactic nuclei, and beyond. A spectropolarimetric mode might also be worth considering for the creators of the PRL Advanced Radial Velocity Abu Sky Search-2 (PARAS-2), a high-resolution spectrograph at the 2.5-m PRL telescope at Mt Abu. This highly stable device has been developed for precise measurements of radial velocities while providing very high spectral resolution. Due to the geographical location of Mt Abu, PARAS-2 can play a critical role in the continuous monitoring of radial velocities for a wide variety of relatively bright objects; moreover, were a spectropolarimetric mode implemented (like HARPSpol at the High Accuracy Radial velocity Planet Searcher, HARPS), PARAS-2 could take its niche in observations of hot magnetic stars, either within the Indo-Belgian collaboration or in third-party projects like MOBSTER <cit.>. (MOBSTER is an acronym for Magnetic OB[A] Stars with TESS: probing their Evolutionary and Rotational properties; it is a collaboration of more than 60 scientists from all over the world.) With the completion of the High-Resolution Spectrograph for the 3.6-m Devasthal Optical Telescope (DOT-HRS), the astronomical community of ARIES will possess the ability to independently carry out studies in the fields of asteroseismology and stellar abundances. Again, as in the case of PARAS-2, spectropolarimetry with DOT-HRS is expected to increase the list of potential applications of this device and could further expand the ongoing Nainital-Cape survey of pulsating early-type stars <cit.>. The rising number of telescopes in India poses questions about the most adequate time-allocation policies and the optimal distribution of observational proposals among existing astronomical facilities. We found the analysis of the time allocation for the 3.6-m DOT over the last six observational cycles, as presented at the workshop, to be particularly useful and relevant for all ARIES facilities — especially considering that the ILMT has started its operation and that next-generation instruments for the 3.6-m DOT are about to arrive. From our perspective, in addition to the proposed improvements, we would also recommend the organisation of regular (e.g., yearly) conferences of the telescope's users under the auspices of the Time Allocation Committee (TAC), where existing and potential applicants would be able to present their proposals or give feedback on the approved or running programmes. Such mini-conferences could be held online, speeding up communication between the TAC and the astronomical community. Naturally, this experience could be applied to other instruments in India and beyond as well. The theme of small telescopes was raised in several talks. 
The Belgian-made High-Efficiency and high-Resolution Mercator Echelle Spectrograph (HERMES), operated at the 1.2-m Mercator telescope on La Palma (Spain), has proven its effectiveness in studies of the chemical composition of single and multiple stars. This spectrograph is used for existing bilateral projects. Complementary opportunities for high-resolution spectroscopy with 1-m-class telescopes, and the prospects of an affordable implementation of adaptive optics on small and moderate-size telescopes, were considered at BINA-3. The interest in these problems highlights the importance of small, properly equipped telescopes for big programmes complementary to missions like the Transiting Exoplanet Survey Satellite (TESS). § MAIN PROGRAMME SESSION BINA provides access to a wide variety of observational facilities located worldwide <cit.>. The observational component mostly determined the agenda of BINA-3. Comets, planets, asteroids, and orbital debris were in the third BINA workshop's spotlight, though other topics such as stars, including stellar multiplicity, and compact objects were also discussed. The selection of objects is largely determined by the areas where optical spectroscopy and photometry are most effective with small and medium-sized telescopes. The exception is the study of planetary atmospheres using the method of stellar occultations. Such techniques require bigger apertures, and their implementation on 3–6-m-class telescopes can be very beneficial. The 3.6-m DOT is among the few instruments on the planet that have regularly been used for observations of such events <cit.>. Various instruments available within the Indo-Belgian collaboration promote the comprehensive study of processes occurring in star formation regions and during the ongoing evolution of stars. The efficiency of multi-wavelength observations was demonstrated by the example of the study of the star-forming H ii region Sh 2-305. However, this is not the only case where Indian telescopes exploring the Universe in the optical, radio, and X-ray domains were successfully combined. We cannot pass over the numerous results on massive binary stars and on stars with discs and circumstellar envelopes presented at the BINA-3 workshop. Stellar multiplicity runs like a golden thread through many talks given in Bhimtal during the workshop. As companions significantly influence stellar lives at all stages of evolution, proper accounting for and evaluation of the companions' properties are crucial. In this regard, work with the catalogues of binary stars, or their extensive study within ongoing or future Indo-Belgian projects, must receive high priority. In such programmes, high-resolution optical spectroscopy of binary and multiple stars must take a special place. Another problem running through the scientific content of BINA-3 is stellar magnetism. As pointed out in the workshop, magnetic fields are ubiquitous on and beyond the main sequence, with their strengths varying substantially. Magnetic fields are responsible for different kinds of stellar activity and can impact stellar evolution. Besides the theoretical aspects pertaining to the physics of these processes, we would like to draw attention to the lack of observational facilities in the Asian region suitable for direct observations of stellar magnetic fields and the associated processes. The worldwide selection of medium-sized and big telescopes equipped with sensitive spectropolarimetric devices is very limited, and Indian telescopes could fill this gap. 
Through the study of chemical composition, one can explore the evolution of individual stars, groups of stars, and the Galaxy at large. The latter is the central task of Galactic archaeology. Pursuing this task depends on the availability of spectra and proper modelling. Despite the various observational results presented at BINA-3, we find a lack of interaction between the BINA members and groups working, e.g., in the U.S., Sweden or Germany, on the theoretical aspects of abundance analysis. We believe tighter cooperation with institutes outside of BINA would take the research on stellar abundances to a qualitatively new level. In contrast to the previous workshops, asteroseismology, a powerful tool for probing stellar interiors and validating stellar parameters, appeared underrepresented at BINA-3. (On a lighter note, a superb cultural show successfully compensated for the lack of “music of the stars” in the conference programme.) This fact is surprising to us, as the Belgian groups in Brussels and Leuven are famous for their proficiency in this field. Apart from Galactic archaeology, which deals with the evolution of chemical composition, probing the Galactic structure is another important direction of work within BINA. Even now, after decades of extensive exploration of the Galaxy using different methods, our knowledge of its structure is incomplete. Optical polarimetry helps to reveal the detailed fine structure of dust clouds in star formation regions or in the areas of young open clusters. Indian astronomers are experienced in this kind of work, and their results, both published <cit.> and presented during BINA-3, deserve special attention. We look forward to further expanding this direction of Galactic studies at a new technical level. § SOLAR PHYSICS SESSION The core of the solar physics programme was the study of small-scale structures, waves and flares, as well as coronal mass ejections (CMEs). Science opportunities are often directly associated with instruments such as the Extreme Ultraviolet Imager (EUI) onboard the Solar Orbiter. The EUI provides a crucial link between the solar surface, on the one hand, and the corona and solar wind, on the other, which ultimately shape the structure and dynamics of the interplanetary medium. Several contributions focused on wave propagation, including its relevance to small-scale structures of the solar chromosphere, transition region and corona, such as flares, spicules and loop systems. This kind of research considered both observations and theoretical work, such as ab-initio simulations for standing waves and slow magneto-acoustic waves. Studies of the outer solar atmosphere also utilized the Interface Region Imaging Spectrograph (IRIS) and the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO). In alignment with previous studies in the literature, the potential of spectral lines, including line asymmetries, for the identification of solar atmospheric heating processes has been pointed out and carefully examined. Clearly, this approach is relevant to both solar physics and studies of solar-type stars of different ages and activity levels; it allows solar studies to be embedded in a broader context. Regarding CMEs, a major driver of space weather and geomagnetic storms, attention has been paid to the EUropean Heliospheric FORecasting Information Asset (EUHFORIA), which is relevant for MHD modelling and the study of the evolution of CMEs in the heliosphere. 
In this regard, a pivotal aspect is the study of the thermodynamic and magnetic properties of CMEs as well as CME forward-modelling, aimed at predicting CME breakouts as well as CME topologies and magnitudes. Relevant spectral line features include Fe XIV and Fe XI data, obtained with existing instruments or available in the archive. Another notable item was the presentation of long-term variations of solar differential rotation and the solar cycle; the latter still poses a large set of unanswered scientific questions. § RETROSPECTIVE AND RECOMMENDATIONS A key element of BINA-3 is the future availability of the ILMT. The science goals of the ILMT include cosmological research, such as the statistical determination of key cosmological parameters through surveying quasars and supernovae, as well as photometric variability studies of stars, transiting extrasolar planets and various types of transient events. Another aspect consists of the search for faint extended objects like low-surface-brightness and star-forming galaxies. The pronounced use of the ILMT, typically in conjunction with other available facilities, requires the ongoing pursuit of international collaborations; this activity is pivotal for future success. Another key aspect is the significance of theoretical studies. Regarding solar physics research, previous work encompasses the study of MHD waves and small-scale transients, with a focus on the solar chromosphere, transition region and corona. Some of this work made extensive use of the EUI onboard the Solar Orbiter. The study of the fine structure of the outer solar atmosphere utilized IRIS and the AIA onboard the SDO. Time-dependent coronal studies, especially of CMEs, are of great significance for the Earth, regarding, e.g., the onset of geomagnetic storms and the safety of equipment, including that associated with satellite communication [see <https://www.swpc.noaa.gov> for further information]. Further advances in this field are expected to benefit from additional observational studies as well as advances in theory, particularly at the interface of the two. Regarding theoretical work, ongoing and future efforts should continue to focus on 3-D magnetohydrodynamic studies in conjunction with the adequate inclusion of radiative transfer and statistical phenomena, as well as aspects of chaos theory. There are other items with the potential for future successful developments. Asteroseismology was underrepresented at BINA-3. It is a powerful tool in the context of stellar evolution studies and the validation and improvement of stellar parameters; the latter is also relevant in the context of extrasolar planet investigations. Further important aspects concern the study of stellar magnetism and activity. Besides elementary stellar studies, these topics are also of critical importance regarding circumstellar habitability and astrobiology at large <cit.>. Moreover, studies of AGNs and GRBs are cardinal topics beyond solar and stellar physics; they have gained considerable momentum within the scientific community. Processes in these extragalactic objects are characterized by high energies and rich spectra. Among the variety of works presented during BINA-3, studies of active galactic nuclei (AGNs) and of different transients like gamma-ray bursts (GRBs) continue to deserve special attention. The members of BINA have an extensive set of instruments available for multi-wavelength observations of these extragalactic sources, yet there is still room for improvement. 
Considerable advances are attainable both in instrumentation and in techniques of analysis. In the studies of intra-night variability of blazars presented in the workshop's programme <cit.>, we noted the lack of international contributors, although these types of objects are in the spotlight of groups working, e.g., at the 6-m telescope of the Special Astrophysical Observatory, located in the North Caucasus region of Russia <cit.>. Given the absence of polarimetric devices for observations with the 3.6-m DOT at the moment, such cooperation could open new opportunities. Connections established at the personal level between the member institutions of BINA and observatories operating big telescopes would facilitate future studies in extragalactic astronomy, where the aperture matters. Similarly, we would recommend establishing collaborations with institutes operating robotic telescopes for the observation of transients. However, a more radical future step might be an expansion of Indian observational facilities towards other continents, especially South America. A small network of medium-sized, fully robotic telescopes could provide easy access to observations and be used for educational purposes. It would also reduce the dependence on astronomical monitoring carried out from South Asia alone, in consideration of possible drawbacks due to the regional climate. Last but not least, in the field of data analysis, the leitmotif now is the use of machine learning (ML) and artificial intelligence (AI). This theme was raised several times during the workshop, but we believe that it could find broader applications in projects related to the classification of light curves and spectra. At the same time, we would recommend that researchers using ML and AI in their work not ignore advances in theory, as without proper constraints and background information, these methods might lead to impractical results, especially if based on small samples. §.§.§ Acknowledgments The authors are grateful to the scientific and local organizing committees of BINA-3 for inviting them to summarize the workshop and for further assistance in preparing these proceedings. §.§.§ ORCID identifiers of the authors 0000-0002-1912-1342 (Eugene Semenko), 0000-0002-8883-2930 (Manfred Cuntz) §.§.§ Author contributions Both authors contributed equally to this publication. §.§.§ Conflicts of interest The authors declare no conflict of interest.
http://arxiv.org/abs/2307.03949v1
20230708103948
Ergodic observables in non-ergodic systems: the example of the harmonic chain
[ "Marco Baldovin", "Raffaele Marino", "Angelo Vulpiani" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Institute for Complex Systems - CNR, P.le Aldo Moro 2, 00185, Rome, Italy Université Paris-Saclay, CNRS, LPTMS,530 Rue André Rivière, 91405, Orsay, France Dipartimento di Fisica e Astronomia, Universitá degli Studi di Firenze, Via Giovanni Sansone 1, 50019, Sesto Fiorentino, Italy Dipartimento di Fisica, Sapienza Universitá di Roma, P.le Aldo Moro 5, 00185, Rome, Italy In the framework of statistical mechanics the properties of macroscopic systems are deduced starting from the laws of their microscopic dynamics. One of the key assumptions in this procedure is the ergodic property, namely the equivalence between time averages and ensemble averages. This property can be proved only for a limited number of systems; however, as proved by Khinchin <cit.>, weak forms of it hold even in systems that are not ergodic at the microscopic scale, provided that extensive observables are considered. Here we show in a pedagogical way the validity of the ergodic hypothesis, at a practical level, in the paradigmatic case of a chain of harmonic oscillators. By using analytical results and numerical computations, we provide evidence that this non-chaotic integrable system shows ergodic behavior in the limit of many degrees of freedom. In particular, the Maxwell-Boltzmann distribution turns out to fairly describe the statistics of the single particle velocity. A study of the typical time-scales for relaxation is also provided. Ergodic observables in non-ergodic systems: the example of the harmonic chain Angelo Vulpiani August 12, 2023 ============================================================================== § INTRODUCTION Since the seminal works by Maxwell, Boltzmann and Gibbs, statistical mechanics has been conceived as a link between the microscopic world of atoms and molecules and the macroscopic one where everyday phenomena are observed <cit.>. The same physical system can be described, in the former, by an enormous number of degrees of freedom N (of the same order of the Avogadro number) or, in the latter, in terms of just a few thermodynamics quantities. Statistical mechanics is able to describe in a precise way the behavior of these macroscopic observables, by exploiting the knowledge of the laws for the microscopic dynamics and classical results from probability theory. Paradigmatic examples of this success are, for instance, the possibility to describe the probability distribution of the single-particle velocity in an ideal gas <cit.>, as well as the detailed behavior of phase transitions <cit.> and critical phenomena <cit.>. In some cases (Bose-Einstein condensation <cit.>, absolute negative temperature systems <cit.>) the results of statistical mechanics were able to predict states of the matter that were never been observed before. In spite of the above achievements, a complete consensus about the actual reasons for such a success has not been yet reached within the statistical mechanics community. The main source of disagreement is the so-called “ergodic hypothesis”, stating that time averages (the ones actually measured in physics experiments) can be computed as ensemble averages (the ones appearing in statistical mechanics calculations). Specifically, a system is called ergodic when the value of the time average of any observable is the same as the one obtained by taking the average over the energy surface, using the microcanonical distribution <cit.>. 
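Written out explicitly (in notation we introduce here only for definiteness), the ergodic property states that, for (almost) every initial condition on the energy surface ℋ = E,
\[
\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathcal{O}\bigl(\mathbf{q}(t),\mathbf{p}(t)\bigr)\,\mathrm{d}t
\;=\;
\frac{\int \mathrm{d}\mathbf{q}\,\mathrm{d}\mathbf{p}\;\mathcal{O}(\mathbf{q},\mathbf{p})\,\delta\bigl(\mathcal{H}(\mathbf{q},\mathbf{p})-E\bigr)}
{\int \mathrm{d}\mathbf{q}\,\mathrm{d}\mathbf{p}\;\delta\bigl(\mathcal{H}(\mathbf{q},\mathbf{p})-E\bigr)},
\]
i.e. the time average of any observable 𝒪 equals its microcanonical average.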
It is worth mentioning that, from a mathematical point of view, ergodicity holds only for a small amount of physical systems: the KAM theorem <cit.> establishes that, strictly speaking, non-trivial dynamics cannot be ergodic. Nonetheless, the ergodic hypothesis happens to work extremely well also for non-ergodic systems. It provides results in perfect agreement with the numerical and experimental observations, as seen in a wealth of physical situations <cit.>. Different explanations for this behavior have been provided. Without going into the details of the controversy, three main points of view can be identified: (i) the “classical” school based on the seminal works by Boltzmann and the important contribution of Khinchin, where the main role is played by the presence of many degrees of freedom in the considered systems  <cit.>; (ii) those, like the Prigogine school, who recognize in the chaotic nature of the microscopic evolution the dominant ingredient <cit.>; (iii) the maximum entropy point of view, which does not consider statistical mechanics as a physical theory but as an inference methodology based on incomplete information <cit.>. The main aim of the present contribution is to clarify, at a pedagogical level, how ergodicity manifests itself for some relevant degrees of freedom, in non-ergodic systems. We say that ergodicity occurs “at a practical level”. To this end, a classical chain of N coupled harmonic oscillators turns out to be an excellent case study: being an integrable system, it cannot be suspected of being chaotic; still, “practical” ergodicity is recovered for relevant observables, in the limit of N≫1. We believe that this kind of analysis supports the traditional point of view of Boltzmann, which identifies the large number of degrees of freedom as the reason for the occurrence of ergodic behavior for physically relevant observables. Of course, these conclusions are not new. In the works of Khinchin (and then Mazur and van der Lynden) <cit.> it is rigorously shown that the ergodic hypothesis holds for observables that are computed as an average over a finite fraction of the degrees of freedom, in the limit of N ≫ 1. Specifically, if we limit our interest to this particular (but non-trivial) class of observables, the ergodic hypothesis holds for almost all initial conditions (but for a set whose probability goes to zero for N →∞), within arbitrary accuracy. In addition, several numerical results for weakly non-linear systems  <cit.>, as well as integrable systems <cit.>, present strong indications of the poor role of chaotic behaviour, implying the dominant relevance of the many degrees of freedom. Still, we think it may be useful, at least from a pedagogical point of view, to analyze an explicit example where analytical calculations can be made (to some extent), without losing physical intuition about the model. The rest of this paper is organized as follows. In Section <ref> we briefly recall basic facts about the chosen model, to fix the notation and introduce some formulae that will be useful in the following. Section <ref> contains the main result of the paper. We present an explicit calculation of the empirical distribution of the single-particle momentum, given a system starting from out-of-equilibrium initial conditions. We show that in this case the Maxwell-Boltzmann distribution is an excellent approximation in the N→∞ limit. 
Section <ref> is devoted to an analysis of the typical times at which the described ergodic behavior is expected to be observed; a comparison with a noisy version of the model (which is ergodic by definition) is also provided. In Section <ref> we draw our final considerations. § MODEL We are interested in the dynamics of a one-dimensional chain of N classical harmonic oscillators of mass m. The state of the system is described by the canonical coordinates {q_j(t), p_j(t)} with j=1,..,N; here p_j(t) identifies the momentum of the j-th oscillator at time t, while q_j(t) represents its position. The j-th and the (j+1)-th particles of the chain interact through a linear force of intensity κ|q_j+1-q_j|, where κ is the elastic constant. We will assume that the first and the last oscillator of the chain are coupled to virtual particles at rest, with infinite inertia (the walls), i.e. q_0≡ q_N+1≡ 0. The Hamiltonian of the model reads therefore ℋ(𝐪,𝐩)=∑_j=0^N p_j^2/2 m + ∑_j=0^Nm ω_0^2 /2(q_j+1 - q_j)^2, where ω_0=√(κ/m). Such a system is integrable and, therefore, trivially non-ergodic. This can be easily seen by considering the normal modes of the chain, i.e. the set of canonical coordinates Q_k=√(2/N+1)∑_j=1^N q_j sinj k π/N+1 P_k=√(2/N+1)∑_j=1^N p_j sinj k π/N+1 , with k=1, ..., N. Indeed, by rewriting the Hamiltonian in terms of these new canonical coordinates one gets ℋ(𝐐,𝐏)=1/2∑_k=1^N P_k^2/m + ω_k^2 Q_k^2 , where the frequencies of the normal modes are given by ω_k=2 ω_0 sinπ k/2N +2 . In other words, the system can be mapped into a collection of independent harmonic oscillators with characteristic frequencies {ω_k}. This system is clearly non-ergodic, as it admits N integrals of motion, namely the energies E_k=1/2P_k^2/m + ω_k^2 Q_k^2 associated to the normal modes. In spite of its apparent simplicity, the above system allows the investigation of some nontrivial aspects of the ergodic hypothesis, and helps clarifying the physical meaning of this assumption. § ERGODIC BEHAVIOR OF THE MOMENTA In this section we analyze the statistics of the single-particle momenta of the chain. We aim to show that they approximately follow a Maxwell-Boltzmann distribution 𝒫_MB(p)=√(β/2π m)e^-β p^2/2m in the limit of large N, where β is the inverse temperature of the system. With the chosen initial conditions, β=N/E_tot. Firstly, extending some classical results by Kac <cit.>, we focus on the empirical distribution of the momentum of one particle, computed from a unique long trajectory, namely 𝒫_e^(j)p=1 T∫_0^T dt δp -p_j(t) . Then we consider the marginal probability distribution 𝒫_ep,t computed from the momenta {p_j} of all the particles at a specific time t, i.e. 𝒫_ep,t=1 N∑_j=1^N δp -p_j(t) . In both cases we assume that the system is prepared in an atypical initial condition. More precisely, we consider the case in which Q_j(0)=0, for all j, and the total energy E_tot, at time t=0, is equally distributed among the momenta of the first N^⋆ normal modes, with 1 ≪ N^⋆≪ N: P_j(0)= √(2m E_tot/N^⋆) for 1 ≤ j ≤ N^⋆ 0 for N^⋆< j ≤ N . In this case, the dynamics of the first N^⋆ normal modes is given by Q(t) =√(2 E_tot/ω_k^2N^⋆)sinω_k t P(t) =√(2 m E_tot/N^⋆)cosω_k t . §.§ Empirical distribution of single-particle momentum Our aim is to compute the empirical distribution of the momentum of a given particle p_j, i.e., the distribution of its values measured in time. This analytical calculation was carried out rigorously by Mazur and Montroll in Ref. <cit.>. 
Here, we provide an alternative argument that has the advantage of being more concise and intuitive, in contrast to the mathematical rigour of <cit.>. Our approach exploits the computation of the moments of the distribution; by showing that they are the same, in the limit of infinite measurement time, as those of a Gaussian, it is possible to conclude that the considered momentum follows the equilibrium Maxwell-Boltzmann distribution. The assumption N≫1 will enter explicitly the calculation. The momentum of the j-th particle can be written as a linear combination of the momenta of the normal modes by inverting Eq. (<ref>): p_j(t) =√(2/N+1)∑_k=1^N sinj k π/N+1 P_k(t) =2√(m E_tot/(N+1)N^⋆)∑_k=1^N^⋆sinkjπ/N+1cosω_k t where the ω_k's are defined by Eq. (<ref>), and the dynamics (<ref>) has been taken into account. The n-th empirical moment of the distribution is defined as the average p_j^n of the n-th powerof p_j over a measurement time T: p_j^n =1/T∫_0^Tdt p_j^n(t) =1/T∫_0^Tdt (C_N^⋆)^n ∏_l=1^n∑_k_l=1^N^⋆sink_l jπ/N+1cosω_k_l t =(C_N^⋆)^n ∑_k_1=1^N^⋆…∑_k_n=1^N^⋆sink_1jπ/N+1 …sink_njπ/N+1 1/T∫_0^Tdt cosω_k_1 t…cosω_k_n t with C_N^⋆=2√(m E_tot/(N+1)N^⋆) . We want to study the integral appearing in the last term of the above equation. To this end it is useful to recall that 1/2 π∫_0^2πd θcos^n(θ)= (n-1)!!/n!! for n even 0 for n odd . As a consequence, one has 1/T∫_0^Td t cos^n(ω t)≃(n-1)!!/n!! for n even 0 for n odd . Indeed, we are just averaging over ≃ω T/2 π periods of the integrated function, obtaining the same result we get for a single period, with a correction of the order O(ω T)^-1. This correction comes from the fact that T is not, in general, an exact multiple of 2 π/ω. If ω_1, ω_2, ..., ω_q are incommensurable (i.e., their ratios cannot be expressed as rational numbers), provided that T is much larger than (ω_j-ω_k)^-1 for each choice of 1 ≤ k < j ≤ q, a well known result <cit.> assures that 1/T∫_0^Td t cos^n_1(ω_1 t)·...·cos^n_q(ω_q t) ≃ 1/T∫_0^Td t cos^n_1(ω_1 t)·...·1/T∫_0^Td t cos^n_q(ω_1 t) ≃ (n_1-1)!!/n_1!!· ...·(n_q-1)!!/n_q!! if all n's are even , where the last step is a consequence of Eq. (<ref>). Instead, if at least one of the n's is odd, the above quantity vanishes, again with corrections due to the finite time T. Since the smallest sfrequency is ω_1, one has that the error is at most of the order Oq(ω_1 T)^-1≃ O(qN /ω_0 T). Let us consider again the integral in the last term of Eq. (<ref>). The ω_k's are, in general, incommensurable. Therefore, the integral vanishes when n is odd, since in that case at least one of the {n_l}, l=1,...,q, will be odd. When n is even, the considered quantity is different from zero as soon as the k's are pairwise equal, so that n_1=...=n_q=2. In the following we will neglect the contribution of terms containing groups of four or more equal k's: if n≪ N^⋆, the number of these terms is indeed ∼ O(N^⋆) times less numerous than the pairings, and it can be neglected if N^⋆≫1 (which is one of our assumptions on the initial condition). Calling Ω_n the set of possible pairings for the vector 𝐤=(k_1,...,k_l), we have then p_j^n≃C_N^⋆/√(2)^n ∑_𝐤∈Ω_n∏_l=1^n sink_ljπ/N+1 , with an error of O(1/N^⋆) due to neglecting groups of 4, 6 and so on, and an error O(nN/ω_0 T) due to the finite averaging time T, as discussed before. Factor 2^-n/2 comes from the explicit evaluation of Eq. (<ref>) . At fixed j, we need now to estimate the sums appearing in the above equation, recalling that the k's are pairwise equal. 
If j> N/N^⋆, the arguments of the periodic functions can be thought as if independently extracted from a uniform distribution 𝒫(k)=1/N^⋆. One has: sin^2 kj π/N+1≃∑_k=1^N^⋆1/N^⋆sin^2 kj π/N+1≃1/2 π∫_-π^πd θ sin^2(θ)=1/2 , and ∏_l=1^n sink_ljπ/N+1≃ 2^-n/2 , if 𝐤∈Ω_n. As a consequence p_j^n ≃C_N^⋆/2^n (N^⋆)^n/2 𝒩(Ω_n)≃m E_tot/N+1^n/2𝒩(Ω_n) , where 𝒩(Ω_n) is the number of ways in which we can choose the pairings. These are the moments of a Gaussian distribution with zero average and m E_tot/N+1 variance. Summarising, it is possible to show that, if n ≪ N^⋆≪ N, the first n moments of the distribution are those of a Maxwell-Boltzmann distribution. In the limit of N≫1 with N^⋆/N fixed, the Gaussian distribution is thus recovered up to an arbitrary number of moments. Let us note that the assumption Q_j(0)=0, while allowing to make the calculations clearer, is not really relevant. Indeed, if Q_j(0)≠ 0 we can repeat the above computation while replacing ω_k t by ω_k t + ϕ_k, where the phases ϕ_k take into account the initial conditions. Fig. <ref> shows the standardized histogram of the relative frequencies of single-particle velocities of the considered system, in the N ≫ 1 limit, with the initial conditions discussed before. As expected, the shape of the distribution tends to a Gaussian in the large-time limit. §.§ Distribution of momenta at a given time A similar strategy can be used to show that, at any given time t large enough, the histogram of the momenta is well approximated by a Gaussian distribution. Again, the large number of degrees of freedom plays an important role. We want to compute the empirical moments p^n(t)=1/N∑_j=1^N p_j^n(t) , defined according to the distribution 𝒫_e^(j)p introduced by Eq. (<ref>). Using again Eq. (<ref>) we get p^n(t)= 1/N∑_j=1^N(C_N^⋆)^n∑_k=1^N^⋆sinkjπ/N+1cosω_k t^n = 1/N(C_N^⋆)^n∑_k_1^N^⋆…∑_k_n=1^N^⋆∏_l=1^Ncosω_k_lt∑_j=1^Nsink_1 j π/N+1…sink_n j π/N+1 . Reasoning as before, we see that the sum over j vanishes in the large N limit unless the k's are pairwise equal. Again, we neglect the terms including groups of 4 or more equal k's, assuming that n≪ N^⋆, so that their relative contribution is O(1/N^⋆). That sum selects paired values of k for the product inside the square brackets, and we end with p^n(t)≃1/N(C_N^⋆)^n∑_𝐤∈Ω_n∏_l=1^Ncosω_k_lt . If t is “large enough” (we will come back to this point in the following section), different values of ω_k_l lead to completely uncorrelated values of cos(ω_k_l t). Hence, as before, we can consider the arguments of the cosines as extracted from a uniform distribution, obtaining p^n(t)≃C_N^⋆/2^n (N^⋆)^n/2 𝒩(Ω_n)≃m E_tot/N+1^n/2𝒩(Ω_n) . These are again the moments of the equilibrium Maxwell-Boltzmann distribution. We had to assume n ≪ N^⋆, meaning that a Gaussian distribution is recovered only in the limit of large number of degrees of freedom. The empirical distribution can be compared with the Maxwell-Boltzmann by looking at the Kullback-Leibler divergence K(𝒫_e(p,t), 𝒫_MB(p)) which provides a sort of distance between the empirical 𝒫_e(p,t) and the Maxwell-Boltzmann: K[𝒫_e(p,t), 𝒫_MB(p)]= - ∫𝒫_e(p,t) ln𝒫_MB(p)/𝒫_e(p,t) dp. Figure <ref> shows how the Kullback-Leibler divergences approach their equilibrium limit, for different values of N. As expected, the transition happens on a time scale that depends linearly on N. A comment is in order: even if this behaviour may look similar to the H-Theorem for diluited gases, such a resemblance is only superficial. 
Indeed, while in the cases of diluited gases the approach to the Maxwell-Boltzmann is due to the collisions among different particles that actually exchange energy and momentum, in the considered case the “thermalization” is due to a dephasing mechanism. § ANALYSIS OF THE TIME SCALES In the previous section, when considering the distribution of the momenta at a given time, we had to assume that t was “large enough” in order for our approximations to hold. In particular we required cos(ω_k_1t) and cos(ω_k_2t) to be uncorrelated as soon as k_1 k_2. Such a dephasing hypothesis amounts to asking that |ω_k_1t-ω_k_2t|> 2π c , where c is the number of phases by which the two oscillator have to differ before they can be considered uncorrelated. The constant c may be much larger than 1, but it is not expected to depend strongly on the size N of the system. In other words, we require t> c/|ω_k_1-ω_k_2| for each choice of k_1 and k_2. To estimate this typical relaxation time, we need to pick the minimum value of |ω_k_1-ω_k_2| among the possible pairs (k_1,k_2). This term is minimized when k_1=k̃ and k_2=k̃-1 (or vice-versa), with k̃ chosen such that ω_k̃-ω_k̃-1 is minimum. In the large-N limit this quantity is approximated by ω_k̃-ω_k̃-1=ω_0sink̃π/2N+2-ω_0sink̃π- π/2N+2≃ω_0cosk̃π/2N+2π/2N+2 , which is minimum when k̃ is maximum, i.e. for k̃=N^⋆. Dephasing is thus expected to occur at t> 4cN/ω_0cosN^⋆π/2N , i.e. t>4cN/ω_0 in the N^⋆/N ≪ 1 limit. It is instructive to compare this characteristic time with the typical relaxation time of the “damped” version of the considered system. For doing so, we assume that our chain of oscillators is now in contact with a viscous medium which acts at the same time as a thermal bath and as a source of viscous friction. By considering the (stochastic) effect of the medium, one gets the Klein-Kramers stochastic process <cit.> ∂ q_j/∂ t=p_j/m ∂ p_j/∂ t=ω_0^2(q_j+1 - 2 q_j + q_j-1) -γ p_j + √(2 γ T)ξ_j where γ is the damping coefficient and T is the temperature of the thermal bath (we are taking the Boltzmann constant k_B equal to 1). Here the {ξ_j} are time-dependent, delta-correlated Gaussian noises such that ξ_j(t)ξ_k(t')=δ_jkδ(t-t'). Such a system is surely ergodic and the stationary probability distribution is the familiar equilibrium one 𝒫_s(𝐪,𝐩) ∝ e^-H(𝐪,𝐩)/T. Also in this case we can consider the evolution of the normal modes. By taking into account Eqs. (<ref>) and (<ref>) one gets Q̇_̇k̇ =1/m P_k Ṗ_̇k̇ =- ω_k^2 Q_k - γ/m P + √(2 γ T)ζ_k where the {ζ_k} are again delta-correlated Gaussian noises. It is important to notice that also in this case the motion of the modes is independent (i.e. the friction does not couple normal modes with different k); nonetheless, the system is ergodic, because the presence of the noise allows it to explore, in principle, any point of the phase-space. The Fokker-Planck equation for the evolution of the probability density function 𝒫Q_k,P_k,t of the k-th normal mode can be derived using standard methods <cit.>: ∂_t𝒫=-∂_Q_kP_k𝒫+∂_P_kω_k^ 2Q_k𝒫+γ/mP_k𝒫+γ T∂_P_k^2 𝒫 . The above equation allows to compute also the time dependence of the correlation functions of the system in the stationary state. In particular one gets d/dtQ_k(t) Q_k(0)=1/mP_k(t)Q_k(0) and d/dtP_k(t) Q_k(0)-ω_k^2 m Q_k(t) Q_k(0) -γ/mP_k(t) Q_k(0) , which, once combined together, lead to d^2/d t^2Q_k(t) Q_k(0)+γ/md/dtQ_k(t) Q_k(0)+ ω_k^2Q_k(t) Q_k(0)=0 . 
For ω_k < γ/m the solution of this equation admits two characteristic frequencies ω̃_±, namely ω̃_± = (γ/2m)(1 ± √(1 - m^2 ω_k^2/γ^2)). In the limit ω_k ≪ γ/m one has therefore ω̃_- ≃ (m/4γ) ω_k^2 ≃ m ω_0^2 π^2 k^2/(4γ N^2). Therefore, as a matter of fact, even in the damped case the system needs a time that scales as N^2 in order to reach complete relaxation of the modes. As we discussed before, the dephasing mechanism that guarantees “practical” ergodicity in the deterministic version is instead expected to occur on time scales of order O(N). § CONCLUSIONS The main aim of this paper was to expose, at a pedagogical level, some aspects of the foundations of statistical mechanics, namely the role of ergodicity for the validity of the statistical approach to the study of complex systems. We analyzed a chain of classical harmonic oscillators (i.e. a paradigmatic example of an integrable system, which cannot be suspected of showing chaotic behaviour). By extending some well-known results by Kac <cit.>, we showed that the Maxwell-Boltzmann distribution approximates with arbitrary precision (in the limit of a large number of degrees of freedom) the empirical distribution of the momenta of the system, after a dephasing time which scales with the size of the chain. This is true also for quite pathological initial conditions, where only a small fraction of the normal modes is excited at time t=0. The scaling of the typical dephasing time with the number of oscillators N may appear as a limit of our argument, since this time will diverge in the thermodynamic limit; on the other hand one should consider, as explicitly shown before, that the damped version of this model (which is ergodic by definition) needs times of the order O(N^2) to reach thermalization for each normal mode. This comparison clearly shows that the effective thermalization observed in large systems has little to do with the mathematical concept of ergodicity, and it is instead related to the large number of components concurring to define the global observables that are usually taken into account (in our case, the large number of normal modes that define the momentum of a single particle). When these components cease to be in phase, the predictions of statistical mechanics start to be effective; this can be observed even in integrable systems, without the need for the mathematical notion of ergodicity to hold. In other words, we believe that the present work gives further evidence of the idea (which had been substantiated mathematically by Khinchin, Mazur and van der Linden) that the most relevant ingredient of statistical mechanics is the large number of degrees of freedom, and the global nature of the observables that are typically taken into account. § ACKNOWLEDGEMENTS RM is supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) "A Multiscale integrated approach to the study of the nervous system in health and disease" (DN. 1553 11.10.2022).
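As a practical illustration of the dephasing mechanism analyzed in this paper, the following minimal numerical sketch (our own illustration, not the authors' code; all parameter values are arbitrary) evolves the chain exactly through its normal modes, starting from the atypical initial condition in which the total energy is shared by the momenta of the first N^⋆ modes, and compares the histogram of the single-particle momenta at a late time with the Maxwell-Boltzmann prediction through the Kullback-Leibler divergence:

import numpy as np

# Units with m = omega0 = 1; E_tot = N so that beta = N/E_tot = 1.
m, omega0 = 1.0, 1.0
N, N_star = 2000, 200          # degrees of freedom, number of initially excited modes
E_tot = float(N)

k = np.arange(1, N + 1)
omega = 2.0 * omega0 * np.sin(np.pi * k / (2 * N + 2))   # normal-mode frequencies

# Atypical initial condition: Q_k(0) = 0 for all k, energy equally shared among
# the momenta of the first N_star modes.
P0 = np.zeros(N)
P0[:N_star] = np.sqrt(2.0 * m * E_tot / N_star)

def particle_momenta(t):
    """Exact mode evolution, then the inverse sine transform giving p_j(t)."""
    P_t = P0 * np.cos(omega * t)          # Q_k(0) = 0  =>  P_k(t) = P_k(0) cos(omega_k t)
    j = np.arange(1, N + 1)
    S = np.sin(np.outer(j, k) * np.pi / (N + 1))
    return np.sqrt(2.0 / (N + 1)) * (S @ P_t)

t_late = 200.0 * N / omega0    # well beyond the dephasing time, which grows linearly with N
p = particle_momenta(t_late)

# Histogram of the momenta at t_late versus the Maxwell-Boltzmann prediction.
beta = N / E_tot
hist, edges = np.histogram(p, bins=60, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
p_mb = np.sqrt(beta / (2.0 * np.pi * m)) * np.exp(-beta * centers**2 / (2.0 * m))

# Kullback-Leibler divergence of the empirical histogram from the prediction.
mask = hist > 0
kl = np.sum(hist[mask] * np.log(hist[mask] / p_mb[mask])) * (edges[1] - edges[0])
print(f"KL divergence at t = {t_late:.0f}: {kl:.3e}")

Increasing N at fixed N^⋆/N should drive the divergence towards zero on a time scale growing linearly with N, in line with the analysis above.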
http://arxiv.org/abs/2307.04162v1
20230709125749
A threshold model of plastic waste fragmentation: New insights into the distribution of microplastics in the ocean and its evolution over time
[ "Matthieu George", "Frédéric Nallet", "Pascale Fabre" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.mtrl-sci" ]
Laboratoire Charles-Coulomb, UMR 5221 CNRS – université de Montpellier, Campus Triolet, Place Eugène-Bataillon – CC069, F-34095 Montpellier Cedex 5 – FRANCE Centre de recherche Paul-Pascal, UMR 5031 CNRS – université de Bordeaux, 115 avenue du Docteur-Schweitzer, F-33600 Pessac – FRANCE [Email for correspondence: ][email protected] Laboratoire Charles-Coulomb, UMR 5221 CNRS – université de Montpellier, Campus Triolet, Place Eugène-Bataillon – CC069, F-34095 Montpellier Cedex 5 – FRANCE Plastic pollution in the aquatic environment has been assessed for many years by ocean waste collection expeditions around the globe or by river sampling. While the total amount of plastic produced worldwide is well documented, the amount of plastic found in the ocean, the distribution of particles on its surface and its evolution over time are still the subject of much debate. In this article, we propose a general fragmentation model, postulating the existence of a critical size below which particle fragmentation becomes extremely unlikely. In the frame of this model, an abundance peak appears for sizes around 1mm, in agreement with real environmental data. Using, in addition, a realistic exponential waste feed to the ocean, we discuss the relative impact of fragmentation and feed rates, and the temporal evolution of microplastics (MP) distribution. New conclusions on the temporal trend of MP pollution are drawn. A threshold model of plastic waste fragmentation: new insights into the distribution of microplastics in the ocean and its evolution over time Pascale Fabre August 12, 2023 ============================================================================================================================================== § INTRODUCTION Plastic waste has been dumped into the environment for nearly 70 years, and more and more data are being collected in order to quantify the extent of this pollution. Under the action of degradation agents (UV, water, stress), plastic breaks down into smaller pieces that gradually invade all marine compartments. If the plastic pollution awareness initially stemmed from the ubiquitous presence of macro-waste, it has now become clear that the most problematic pollution is “invisible” i.e. due to smaller size debris, and the literature exploring microplastics (MPs, size between 1 μm and 5 mm) and nanoplastics (NPs, size below 1 μm) quantities and effects is rapidly increasing. The toxicity of plastic particles being dependent on their size and their concentration, it is crucial to know these two parameters in the natural environment to better predict their impacts. While the total amount of plastic produced worldwide is well-documented <cit.>, the total amount of plastic found in the ocean and its time evolution are still under debate: while many repeated surveys and monitoring efforts have failed to demonstrate any convincing temporal trend <cit.>, increasing amounts of plastic are found in some regions, especially in remote areas, and a global increase from ca. 2005 has been suggested <cit.>. Still, some features can be drawn from the available data from the field <cit.> about the size distribution of plastic particles. When browsing the sizes from the largest to the smallest, a first abundance peak is observed around 1 mm <cit.>. Between 1 mm and approximately 150 μm, very few particles are found <cit.>. The abundance increases again from 150 μm down to 10 μm, with an amount of particles which is several orders of magnitude larger than what is found around 1 mm <cit.>. 
To the best of our knowledge, the physical reason <cit.> for the existence of two very different size classes for microplastics (small MP <150 μm, large MP between 1 and 5 mm) is that there are two fragmentation pathways: i) bulk fragmentation with iterative splitting of one piece into two daughters for large MPs, and ii) delamination and disintegration of a thin surface layer (around 100 μm depth) into many particles for small MPs. This description does however not explain the deficit of microplastics of sizes between 150 μm and 1 mm. Early authors attempted to describe the large MP distribution by invoking a simple iterative fragmentation of plastic pieces into smaller objects, conserving the total plastic mass <cit.>, in accordance to pathway i). These models lead to a time-invariant power-law dependence of the MP abundance with size (refer to Supplementary Information <ref> for an elementary version of such models), which is in fair agreement with experimental observations for large MP. However, they fail to describe the occurrence of an abundance peak and the subsequent decrease of the number of MP when going to smaller sizes. Other mechanisms such as sinking, ingestion, etc. have been invoked to qualitatively explain the absence of particles smaller than 1 mm. Very recently, two papers have addressed this issue using arguments related to the fragmentation process itself. Considering the mechanical properties of a one-dimensional material (flexible and brittle fibres) submitted to controlled stresses in laboratory mimicking ocean turbulent flow, Brouzet et al <cit.> have shown both theoretically and experimentally in the one-dimensional case that smaller pieces are less likely to break. Aoki and Furue <cit.> reached theoretically the same conclusion in a two-dimensional case using a statistical mechanics model. Note that both approaches are based on the classical theory of rupture, insofar as plastics fragmenting at sea have generally been made brittle by a long exposure to UVs. In this paper, we also explore pathway i), keeping out of focus pathway ii), since delamination process produces directly very small plastic pieces. Regardless of the fracture mechanics details i.e. the specific characteristics of the plastic waste (shape, elastic moduli, aging behavior) and the exerted stresses, we postulate the existence of a critical size below which bulk fragmentation becomes extremely unlikely. Since many of the microplastics recovered from the surface of the ocean are film-like objects (two dimensions exceeding by a large margin the third one) like those coming from packaging, we construct the particle size distribution over time based on the very idea of a universal failure threshold for breaking two-dimensional objects. A very simple hand-waving argument from everyday's life that illustrates this breaking threshold, is that the smaller a parallelepipedic piece of sugar is, the harder it is to break it, hence the nickname sugar lump model used in this paper. Unlike many previous models, which make the implicit assumption of a stationary distribution, we explicitly describe the temporal evolution of the large MP quantity (see Sections <ref> and <ref>). Moreover, by injecting a realistic waste feed into the model, we discuss the synergistic effect of feeding and fragmentation rates on the large MP distribution, in particular in terms of evolution with time, and compare to the observed data in Section <ref>. 
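Before moving to the model, it may help to spell out a compact version of the elementary argument referred to above (our paraphrase of the kind of reasoning given in the Supplementary Information). With a constant feed of N_0 film-like fragments per step and every fragment splitting in two at every step, the cohort injected at step s contains N_0 2^{t-s} pieces at time t, so that
\[
N_{\mathrm{tot}}(t)\;=\;N_0\sum_{s=0}^{t}2^{\,t-s}\;=\;\bigl(2^{\,t+1}-1\bigr)\,N_0 .
\]
Each split halves the area at fixed thickness, so a cohort that has undergone g = t-s splittings has lateral size L ≃ L_{\mathrm{init}}\,2^{-g/2} and population N_0\,2^{g} = N_0\,(L_{\mathrm{init}}/L)^{2}; counted per size class (one class per generation), the abundance therefore grows as 1/L^{2}, independently of time.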
§ FRAGMENTATION MODEL WITH THRESHOLD The sugar lump iterative model implements the two following essential features: a size-biased probability of fragmentation on the one hand, and a controlled waste feed rate on the other hand. Initially, a constant feeding rate is used in the model. In a second step, the more realistic assumption of an exponentially growing feeding rate is introduced and discussed in comparison with field data (See Section <ref>). At each iteration, we assume that the ocean is fed with a given amount of large parallelepipedic fragments of length L_init, width ℓ_init and thickness h, where h is much smaller than the other two dimensions and length L_init is, by convention, larger than width ℓ_init. At each time step, every fragment potentially breaks into two parallelepipedic pieces of unchanged thickness h. The total volume (or mass) is kept invariant during the process. In addition, we assume that, if the fragment ever breaks during a given step, it always breaks perpendicular to its largest dimension L: A fragment of dimensions (L, ℓ, h) thus produces two fragments of respective dimensions (ρ L,ℓ,h) and ([1-ρ]L,ℓ,h), ρ being in our model a random number between 0 and 0.5. Note that, depending on the initial values of L,ℓ and ρ, one or both of the new dimensions ρ L and [1-ρ]L may become smaller than the previous intermediate size ℓ: the fragmentation of a film-like object, at contrast to the case of a fibre-like object, is not conservative in terms of its largest dimension <cit.>. Furthermore, in order to ensure that the fragment thickness h remains (nearly) constant all along the fragmentation process, ρ values leading to ρ L or (1-ρ)L significantly less than h are rejected in the simulation. This obviously introduces a short length scale cutoff, in the order of h, and a limiting, nearly cubic, shape for the smaller fragments (an “atomic limit”, according to the ancient Greek meaning). A second length scale, L_c, also enters the present model, originating in the mechanical sugar lump approach, described heuristically by means of a breaking efficiency E(L) sigmoidal in L. For the sake of convenience, this efficiency is built here from the classical Gauss error function. It is therefore close to 1 above a threshold value L_c (chosen large enough compared to h) and close to 0 below L_c. A representative example is shown in Fig. <ref>, with L_c/h=100. Note that throughout this paper, all lengths involved in the numerical model will be scaled by the thickness h. Qualitatively speaking, this feature of the model means that when the larger dimension L is below the threshold value L_c, fragments will “almost never” break, even if they haven't reached yet the limiting (approximately) cubic shape of fragments of size ≈ h. For the sake of simplicity, the threshold value is assumed not to depend on plastic type or on residence time in the ocean, considering that weathering occurs from the moment the waste is thrown in the environment and quickly renders all common plastics brittle. A unique L_c is thus used for all fragments. Technical details about the model are given in supplementary information <ref>. § RESULTS AND COMPARISON WITH FIELD DATA In this whole section, we discuss the results obtained with the sugar lump model and systematically compare with what we call the standard model  <cit.>, that is to say the case where fragments always break into two (identical) pieces at each generation, whatever their size. 
Whenever possible and meaningful, we also compare our results with available field data. Therefore, one needs to assign a numerical correspondence between the physical time scale and the duration of a step in the iterative models. The fragmentation rate of plastic pieces can be assessed using accelerated aging experiments <cit.>. The half-life time, corresponding to the time when the average particle size is divided by 2, has been found to be around 1000 hours, which roughly corresponds to one year of solar exposure <cit.>. Hence, the iterative step t used in all following sections can be considered to be on the order of one year. For typical plastic film dimensions, it is reasonable to assume that the thickness h is between 10 and 50 μm, and the initial largest lateral dimension L_init is in the range of 1 to 5 cm. These characteristic lengths, together with the other length scales involved in this paper, are positioned relative to each other in Fig. <ref>. §.§ Evolution of the size distribution and of the total abundance of fragments with time The size distribution of plastic fragments over time is represented in Fig. <ref> for the sugar lump model and compared with the standard model size distribution. The origin of time corresponds to the date when the very first plastic waste was dumped into the ocean. According to the standard model (see Eq. (<ref>), Section <ref>), the number of particles as a function of their size follows a power law of exponent -2, which leads to a divergence of the number of particles at very small sizes (dotted line in Fig. <ref>). For large MP, the prediction of the sugar lump model is broadly similar, i.e. it follows the same power law. By contrast, the existence of a mechanism inhibiting the breaking of smaller objects, as introduced in the sugar lump model, leads to the progressive build-up of an abundance peak for intermediate-size fragments due to the accumulation of fragments with size around L_c (see Section <ref> for details). Moreover, the particle abundance at the peak increases with time while the peak position shifts towards smaller size classes. This shift is fast for the first generations and then slows down as time passes (Fig. <ref>). The inset in Fig. <ref> shows how the existence of a breaking threshold significantly slows down the production of very small particles compared to the standard model. As can be observed from the inset in Fig. <ref>, the peak position L_peak^th, around L_c, decreases in a small range, typically between 1.5L_c and 0.5L_c, for time periods up to a few tens of years. Let us now discuss the comparison with the experimental data. A sample of various field data from different authors <cit.> is displayed in Fig. <ref>. In order to obtain a collapse of the data points for large MPs, a vertical scaling factor has been applied, since abundance values from different sources cannot be directly compared in absolute units. The two main features of these curves are: a maximum abundance at a value of a few millimeters (indicated by a grey zone) and the collapse of the data points onto a single 1/L^2 master curve (indicated by a dashed line). The threshold value L_c is presumably defined by the energy balance between the bending energy required for breaking a film and the available turbulent energy of the ocean. The bending energy depends on the film geometry and on the mechanical properties of the weathered polymer.
As shown by Brouzet et al <cit.>, for a fiber (1D), the threshold L_c is proportional to the fiber diameter d and varies as L_c= kE^1/4/(ρηϵ)^1/8d where E is the Young modulus of the brittle polymer fiber, ρ and η are the mass density and viscosity of water, ϵ is the mean turbulent dissipation rate and k is a prefactor in the order of 1. In two dimensions, the expression for the threshold L_c is more complex, since it depends both on the width ℓ and thickness h of the film. However, based on 2D mechanics, one can show that the order of magnitude and h-dependency for L_c remain the same as in 1D, while the prefactor slightly varies with ℓ. Reasonable assumptions on film geometry, mechanical properties of weathered brittle plastic and highly turbulent ocean events, such as made by Brouzet et al. <cit.> allow us to evaluate that L_c/h ≈ 100. For films of typical thicknesses lying between 10 and 50 μm, this gives a position of the peak between 1 and 5 mm in good agreement with the field data represented in Fig. <ref>. It is also interesting to discuss the power law exponent value exhibited by both standard and sugar-lump models at large MP sizes. In time-invariant models, the theoretical exponent actually varies with the dimensionality of the considered objects (fibres, films, lumps) ranging from -1 (fibres) to -3 (lumps). As expected, when the objects dimensionality is fixed, the value -2 observed in Fig. <ref> for the sugar-lump model is due to the hypothesis of film-like pieces breaking along their larger dimension only, keeping their thickness constant. In the same way, regarding the laboratory experiments performed on glass fibres <cit.>, the large MP distribution is compatible in the long-time limit with the expected -1 power law [provided that, of course, the depletion of very large objects that originates from the absence of feeding is disregarded.]. Coming back to the field data as displayed in Fig. <ref>, one can note that for large MP all data points collapse onto a single 1/L^2 master curve. This suggests that either most collected waste comprises film-like objects breaking along their larger dimension only, or, perhaps more likely, that one collects a mixture of all three types of objects leading to an “average” exponent, obviously lying somewhere between -1 and -3, that turns out to be close to -2. The total abundance N_tot of fragments (all sizes included) as a function of time is represented in Fig. <ref> for both the sugar lump and standard models. In the latter case, the abundance is simply described by an exponential law: N_tot = [2^t+1-1] N_0 when the ocean is fed by a constant number N_0 of (nearly identical) large fragments per iteration (Eq. <ref>, Section <ref>). The sugar lump model predicts a time evolution which deviates from the standard model prediction: The increase of total abundance slows down with time, due to the hindering of smaller fragments production, and the effect is all the more pronounced for larger threshold parameters L_c, as could have been expected. In the realistic case where L_c/h ≈ 100, the increasing rate of fragments production becomes very small for the largest feeding times, as can be observed in Fig. <ref> which shows that the number of MP would be multiplied every ten years by only a factor 2, compared to a factor of 1000 in the standard model. These theoretical results might explain why no clear temporal trend is observed in the field data <cit.>. 
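Coming back to the threshold estimate, the 1D expression for L_c given above can be evaluated directly. The values below are illustrative assumptions on our part (a weathered, brittle polymer film caught in a strongly turbulent event), not numbers taken from the cited works; because of the weak 1/4 and 1/8 exponents, the result is only mildly sensitive to these choices.

# Order-of-magnitude check of L_c = k * E**(1/4) / (rho * eta * eps)**(1/8) * d
k = 1.0        # prefactor of order 1
E = 1.0e9      # Young modulus of the brittle, weathered polymer [Pa]  (assumption)
rho = 1.0e3    # water density [kg/m^3]
eta = 1.0e-3   # water viscosity [Pa.s]
eps = 1.0      # turbulent dissipation rate during an energetic event [m^2/s^3]  (assumption)
d = 20e-6      # film thickness h used as the relevant "diameter" [m]

L_c = k * E ** 0.25 / (rho * eta * eps) ** 0.125 * d
print(L_c, L_c / d)   # about 3.6 mm, i.e. L_c/h of order 10^2, consistent with the text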
§.§ Role of the mesh size on the size distribution and on its temporal evolution If one wants to go further in confronting models to field data, one needs to take into account that the experimental collection of particles in the environment always involves an observation window, and in particular a lower size limit L_mesh, e.g. due to the mesh size of the net used during ocean campaigns. The very existence of a lower limit leads to the appearance of transitory and steady-state regimes for the temporal evolution of the number of collected particles, as will be shown below. In the standard model case, when the feeding and breaking process starts, larger size classes are first filled, while smaller size classes are still empty (Fig. <ref>, Section <ref>). As long as the smaller fragments produced by the breaking process are larger than the lower size limit L_mesh of the collection tool, the number of collected fragments increases with time, de facto producing a transitory regime in the observed total abundance. The size of the smaller fragments reaches L_mesh after a given number of fragmentation steps corresponding to the duration of the transitory regime: t_c≈2ln(L_init/L_mesh)/ln2 where L_init is the initial largest dimension of the plastic fragments released into the ocean. From this time onward, both the size distribution and total number of collected fragments in the observation window no longer change. Even though the production of fragments smaller than L_mesh continues to occur, as well as the continuous feeding of large-scale objects, one therefore observes a steady-state regime. This is illustrated in Fig. <ref> for two different values of the mesh size L_mesh (filled symbols ∙ and ▪). For the sugar lump model case, one needs to also consider the size threshold length scale L_c, below which fragmentation is inhibited. When L_c is much smaller than L_mesh, the threshold length L_c is not in the observation window, hence the analysis is the same as in the standard case. At contrast, when L_c is close to L_mesh or larger, the transitory regime is expected to exhibit two successive time dependencies. This behavior is displayed in Fig. <ref> (open symbols ∘ and □) for the same mesh size values as in the standard model for comparison. At short times, since the smaller fragment size has not reached yet the breaking threshold L_c, the number of collected fragments follows the same law as in the standard case. When the smaller fragments get close to the size L_c, however, the inhibition of their breaking creates an accumulation of fragments around L_c, hence the abundance peak. As a consequence, the increase in the total number of fragments slows down. Since the abundance peak position shifts towards smaller values with time (Fig. <ref>, inset) albeit slowly, a final stationary state should be observed when the abundance peak position becomes significantly smaller than L_mesh. As shown in Fig. <ref>, this occurs within the explored time window for large L_mesh (∘), but the stationary state is not observed for small L_mesh (□), presumably because our simulation has not explored times large enough. When the steady-state regime is reached, the number of fragments above L_mesh, i.e. likely to be collected, remains constant with a value larger than that of the standard model, due to the overshoot induced by the accumulation on the right-hand side of the peak. 
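The duration of the transitory regime given by the expression for t_c above can be read off numerically. The short sketch below is a direct evaluation of that formula, using the centimetric initial debris sizes quoted earlier and a 330 μm net (the mesh value quoted below).

import math

def t_transitory(L_init, L_mesh):
    # number of fragmentation steps before the smallest fragments pass below the mesh size
    return 2.0 * math.log(L_init / L_mesh) / math.log(2.0)

print(t_transitory(1e-2, 330e-6))   # ~9.8 steps, i.e. about ten years for 1 cm debris
print(t_transitory(5e-2, 330e-6))   # ~14.5 steps for 5 cm debris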
Let us recall that the characteristic fragmentation time, defined as the typical duration for a piece to break into two, has been evaluated at one year. In the case of the standard model, this means that the size of each fragment is reduced by a factor 30 in about 10 years. Therefore, starting with debris size of the order of a centimeter, small MPs of typical size the mesh size (330 μm in Fig. <ref>) will be obtained within 10 years only. Thus, 10 years correspond to the duration of the transitory regime t_c established in Eq. (<ref>) and the oceans should be by far in the steady-state regime since the pollution started in the 1950's. It is however no longer controversial nowadays that the standard (steady-state) model fails to describe the size distribution of the field data. On the contrary, the sugar lump model predicts the existence of an abundance peak, in agreement with what is observed during collection campaigns. This peak is due to the accumulation of fragments whose size is in the order of the breaking threshold L_c. As discussed in paragraph <ref>, the failure threshold L_c can be soundly estimated to lie between 1 and 5 mm. Comparison with field data then corresponds to the case where L_c is about ten times larger than the mesh size L_mesh. As just shown in Fig. <ref>, this implies a drastic increase of duration of the transitory regime, that can be estimated to be above 100 years. These considerations lead us to the important conclusion that one is still nowadays in the transitory regime. Moreover, the sugar lump model also implies that the total abundance is correctly estimated through field data collection, i.e. that it is not biased by the mesh size. Because the peak position slowly shifts towards smaller sizes, the mesh size will eventually play a role, but at some much later point in time. Finally, let us recall that this paper does not take into account delamination processes, so the previous statement is only true for millimetric debris, that is to say debris produced through fragmentation, and that micrometric size debris might exhibit a completely different behavior, being probably much more numerous. §.§ Constant versus exponential feeding In the results discussed in Section <ref>, it was assumed that the rate of waste feeding in the ocean is constant with time. However, it is common knowledge that the production of plastics has increased significantly since the 1950's. Geyer et al <cit.> have shown that the discarded waste follows the same trend. Data from the above-quoted article has been extracted and fitted in Fig. <ref> and Fig. <ref> with exponential laws N= N_0(1+τ)^t, where τ represents an annual growth rate, of plastic production and discarded waste respectively. For plastic production, the annual growth rate is found about 16% until 1974, the year of the oil crisis, and close to 6% after 1974 with, perhaps, an even further decrease of the rate in the recent years. Not unexpectedly, the same trends are found when considering the discarded waste, with growth rates, respectively, 17% and 5%. In order to discuss now the effects of an increasing waste feeding in the ocean, we inject for simplicity a single exponential with an intermediate rate of 7% in the two models. When comparing this feeding law and the standard fragmentation law [2^t+1-1] N_0, one easily concludes that the total number of plastics items in the ocean is mainly determined by the fragmentation rate, regardless of the feeding rate. 
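A term-by-term comparison of the two growth laws makes the last statement explicit; the snippet below simply evaluates both expressions with the intermediate 7% rate used in the text.

# Exponential waste feeding N0*(1+tau)**t versus the growth [2**(t+1) - 1]*N0
# produced by unhindered fragmentation (standard model), in units of N0.
tau = 0.07
for t in (10, 20, 40):
    feeding = (1 + tau) ** t
    fragments = 2 ** (t + 1) - 1
    print(t, round(feeding, 1), fragments)   # fragmentation dominates by orders of magnitude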
In order to verify what happens in the case of the sugar lump model, where the fragmentation process is hindered, the size distributions for both feeding hypotheses are numerically compared in Figs. <ref> and <ref>, respectively after 14 and 40 years. It can be observed that at short times, the size distribution is very little altered by the change in feeding. At longer times, a significant increase in the number of the largest particles can be observed, while the number of small particles increases much less. Besides, the size position of the abundance peak is almost unshifted. The total number of fragments is represented in Fig. <ref> for the standard and sugar lump models for the two feeding cases considered. For exponential feeding, the sugar lump model still predicts a significant decrease in the rate of fragment generation over time, whereas one could have thought that exponential feeding would cancel out this slowdown. The conclusions drawn above (Section <ref>) therefore remain valid in the more realistic case of an exponential feeding. Finally, one should keep in mind that, although the feeding rate is a reasonable indicator of plastic pollution, since it describes the evolution over time of the total mass of plastics present in the ocean, it is not enough to properly describe plastic pollution. For a given mass, the number–hence size–of particles produced is the major factor in assessing potential impacts. Indeed, the smaller the size, the larger the particle number concentration, the larger their specific area (hence their adsorption ability), and the larger the ensuing eco-toxicity. It is shown here that the mass of waste roughly doubles every 10 years, whereas the number of particles doubles every year, making fragmentation the main factor driving plastic pollution and impacts. Many studies are devoted to making a mass balance and understanding the fluxes of plastic waste <cit.> but, even in the case of a drastic immediate reduction of waste production, plastic pollution and impacts will affect ocean life for many years to come, due to fragmentation. § CONCLUSION The generalist model presented here is based on a few sound physical assumptions and sheds new light on global temporal trends in the distribution of microplastics at the surface of the oceans. The model shows that the existence of a physical size threshold below which fragmentation is strongly inhibited leads to the accumulation of fragments at a given size, in line with what is observed in the field data. In other words, if one does not collect particles in the range 100 μm–1 mm, it is because only few of them are actually generated by fragmentation at this scale. One would not necessarily need to invoke any other mechanism or bias, such as ingestion by living organisms <cit.> or the mesh size of collection nets <cit.>, to explain the field data for floating debris. As a consequence, the observed distribution does reflect, in our opinion, the real distribution of MPs at the surface of the ocean, down to 100 μm. Besides, the sugar lump model implies a slowdown in the rate of MP production by fragmentation, due to the fact that fragmentation is inhibited when particles approach the threshold size. This may explain the absence of a clear increase in the MP numbers in different geographical areas <cit.> reported by field observations <cit.>.
Two other general facts have been pointed out in this paper: * for large MP, the predicted size distribution follows a power law, whose exponent depends on the dimensionality of the object (-1 for a fibre, -2 for a film and -3 for a lump). It is therefore worth sorting collected objects according to their geometry, as is done for instance when fibres are separated from 2D objects <cit.>. It is however interesting to note that, when the objects are not sorted in this way, an “average value” of -2 is found for the exponent. * the model takes into account an exponentially-increasing waste feeding rate. We have fitted the plastic production since the 1950s and found that there is not one but two exponential laws, the second one, slower than the first, becoming visible after the 1974 oil crisis. Comparing this feeding rate to the exponential fragmentation rate, we show that the number of fragments is mainly determined by the fragmentation process, regardless of the feeding details. To go further and estimate absolute values of MP concentrations over the whole range of sizes, it would be necessary, on the one hand, to take into account delamination in order to obtain the small-particle distribution. On the other hand, one should also be aware of the spatial heterogeneity of particle concentration, and an interesting development could therefore be to combine fragmentation with flow models developed for instance in Refs. <cit.>. § SUPPORTING INFORMATION §.§ Standard model In this model, as pictorially represented in Fig. <ref>, the ocean is fed at each iteration n with a fixed number a_0 of large 2D-like objects, mimicking plastic films. Neglecting size and shape dispersity for convenience, all 0^th-generation objects are assumed to be large square platelets of lateral size L_init and thickness h, with L_init≫ h. Between consecutive iteration steps, fragmentation produces p^th-generation objects by splitting (p-1)^th-generation objects into two equal parts, thus generating square platelets when p is even, but rectangular platelets with aspect ratio 2:1 for odd p. If size is measured by the diagonal, a p^th-generation object has size √(2)L_init/2^p/2 (even p) or √(5)L_init/2^(p+1)/2 (odd p). With size classes described by the number of p^th-generation objects at iteration step n, C(n,p), the filling law of size classes is: [ C(n,0) = a_0 ; C(n,p) = 0 if p>n; C(n,p) = 2C(n-1,p-1) if 1≤ p≤ n ] The set of equations (<ref>) is readily solved: C(n,p)=2^pa_0 for 0≤ p≤ n, and C(n,p)=0 for p>n. Since size L scales with generation index p as 2^-p/2, the steady-state scaling for the filling of size classes is C∝ L^-2. The cumulative abundance S_n≡∑_pC(n,p) at iteration step n is also easily obtained: S_n=[2^n+1-1]a_0 and is displayed as a dashed line in Figs. <ref> and <ref>. As noticed in Ref. <cit.>, where experimental data and model predictions are matched together, the standard model fails for small objects, and this occurs when a (nearly) cubic shape is reached. Since the typical (lateral) size of p^th-generation objects is ≈ L_init/2^p/2, the limit is reached for p_max≈ 2log(L_init/h)/log2, that is to say in about 20 generations with the rough estimate L_init/h=10^3. The set of equations describing the size-class filling law has to be altered to take into account this limit.
Assuming for simplicity that p_max-generation objects cannot be fragmented anymore (“atomic” fragments), this set of equations becomes: [ C(n,0) = a_0 ; C(n,p) = 0 if p>n or p>p_max; C(n,p) = 2C(n-1,p-1) if 1≤ p<p_max and p ≤ n; C(n,p_max) = C(n-1,p_max)+2C(n-1,p_max-1) if n>p_max ] As shown by the explicit solution, Eq. (<ref>) below, the last line in this set of equations leads to an accumulation of “atomic” fragments (see also Fig. <ref> for a pictorial representation of this feature) [ C(n,p) = 2^pa_0 if 0≤ p≤ n<p_max; C(n,p_max) = (n+1-p_max)2^p_maxa_0 if n≥ p_max; C(n,p) = 0 for other cases ] associated to a significant (exponential to linear) slowing down of the cumulative abundance: S_n=[2^p_max(2+n-p_max)-1]a_0 for iteration steps n≥ p_max. §.§ Standard model with inflation As a first extension of the standard model, inflation in the feeding of the ocean with large 2D-like objects is now considered. Taking simultaneously into account the “atomic” nature of small fragments beyond p_max generations, the size-class filling set of equations (<ref>) has to be replaced by: [ C(n,0) = a_0(1+τ)^n ; C(n,p) = 0 if p>n or p>p_max; C(n,p) = 2C(n-1,p-1) if 1≤ p<p_max and p ≤ n; C(n,p_max) = C(n-1,p_max)+2C(n-1,p_max-1) if n>p_max ] Size classes are now described by C(n,p)=2^p(1+τ)^n-pa_0 for 0≤ p≤ n as long as the generation index p remains smaller than p_max and C(n,p_max)=2^p_max[(1+τ)^n-p_max+1-1]a_0/τ for n≥ p_max. Whereas the filling of the size class associated to “atomic” fragments was linear in n without inflation, it becomes here exponential. Consequently, the cumulative abundance, definitely slowed down, remains exponential in n for n>p_max: S_n={(1+τ)^n[(2/1+τ)^p_max-1/1-τ]+2^p_max(1+τ)^n-p_max+1-1/τ}a_0 As long as the “atomic limit” is not reached, the cumulative abundance exhibits a simpler form, namely: S_n=[2^n+1-(1+τ)^n+1]a_0/1-τ that does not significantly differ from Eq. (<ref>). The time-invariant features of the size distribution are nevertheless modified in two respects (see Fig. <ref>): * Inflation spoils the strict time-invariant feature previously observed for the size distribution N(L); * A (nearly) time-invariant behaviour remains as far as scaling is concerned, since N∝1/L^ν, but ν does depend, albeit rather weakly, on the time index n, while being significantly smaller than 2. Fitting data to a power law, an exponent ν close to 1.8 is obtained for inflation τ=7%. §.§ Sugar lump model Taking inspiration from the standard model, Section <ref>, at each iteration the ocean is fed with large parallelepipedic fragments of length L, width ℓ and thickness h, where h is much smaller than the other two dimensions and length L is, by convention, larger than width ℓ. Some size dispersity is introduced when populating the largest size class, by randomly distributing L in the interval [0.9L_init, L_init], and ℓ in [0.7L_init, 0.9L_init], but h is kept fixed. The number of objects feeding the system can be controlled at each iteration step, and two simple limits have been investigated: Constant, or exponentially-growing feeding rates, mimicking two variants of the Standard model, Sections <ref> and <ref>, respectively. Size-classes evenly sampling (in logarithmic scale) the full range of L/h, [1, L_init/h] are populated by sorting into the proper size class the fragments present in the system. 
Except for the 0^th, initialisation step, these fragments are either 0^th-generation fragments just introduced into the system, obviously belonging to the largest size class, or g-generation fragments (g≥1) that have been “weathered” during the time step from step n to step n+1 and then split, with a L-dependent efficiency, into two smaller fragments. As explained in Section <ref>, the splitting process, albeit random, explicitly ensures the existence of an “atomic” limit: Fragments belonging to the smallest size class cannot be fragmented any further. As tentatively illustrated in Fig. <ref>, a special feature of the model is that generations (g) and size-class (p) indices have to be distinguished because, at contrast with the standard model, although for a given fragment a “weathering” event (n→ n+1) is always associated to an “ageing” event (g increased by one), it is not always associated to populating one or two lower-size classes (and simultaneously decreasing by 1 the abundance of the considered size-class) because the splitting process is not 100% efficient. Keeping track of abundances in terms of time (n), age (g) and size (p) being computationally demanding for exponentially-growing populations, our simulations have been limited to, at most, n=g=40. The number of distinct size classes has also been limited to 28, as this corresponds to the number of size-classes reported in Ref. <cit.>.
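For readers who prefer code to prose, one iteration of the sugar lump model can be sketched as below. This is a simplified illustration written for this summary, not the simulation code itself: all lengths are scaled by the thickness h, the relative width of the erf-based breaking efficiency is our own assumption, and a cut that would produce a piece thinner than h is simply skipped for that step rather than redrawn.

import math, random

def breaking_efficiency(L, L_c, width=0.2):
    # sigmoid built from the Gauss error function: ~0 below L_c, ~1 above L_c
    return 0.5 * (1.0 + math.erf((L - L_c) / (width * L_c)))

def one_step(fragments, L_c, a0=10, L_init=1000.0):
    # `fragments` is a list of (L, l) pairs with L >= l >= 1 (thickness h = 1)
    new = []
    for L, l in fragments:
        rho = random.uniform(0.0, 0.5)
        if random.random() < breaking_efficiency(L, L_c) and rho * L >= 1.0:
            a, b = rho * L, (1.0 - rho) * L        # break perpendicular to the largest dimension
            new.append((max(a, l), min(a, l)))     # re-sort so that L >= l for each daughter
            new.append((max(b, l), min(b, l)))
        else:
            new.append((L, l))                     # piece survives this step intact
    # feeding with a0 fresh large film-like fragments
    new.extend((random.uniform(0.9, 1.0) * L_init,
                random.uniform(0.7, 0.9) * L_init) for _ in range(a0))
    return new

fragments = []
for _ in range(20):
    fragments = one_step(fragments, L_c=100.0)
print(len(fragments), min(L for L, _ in fragments))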
http://arxiv.org/abs/2307.07662v1
20230714235449
MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression
[ "Ma Siliang", "Xu Yong" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Siliang Ma and Yong Xu, Institute of Computer Science and Engineering, South China University of Technology, Guangzhou 510000, China Bounding box regression (BBR) has been widely used in object detection and instance segmentation, and is an important step in object localization. However, most of the existing loss functions for bounding box regression cannot be optimized when the predicted box has the same aspect ratio as the groundtruth box but different width and height values. In order to tackle the issues mentioned above, we fully explore the geometric features of horizontal rectangles and propose a novel bounding box similarity comparison metric, MPDIoU, based on minimum point distance, which contains all of the relevant factors considered in the existing loss functions, namely overlapping or non-overlapping area, central point distance, and deviation of width and height, while simplifying the calculation process. On this basis, we propose a bounding box regression loss function based on MPDIoU, called ℒ_MPDIoU. Experimental results show that applying the MPDIoU loss function to state-of-the-art instance segmentation (e.g., YOLACT) and object detection (e.g., YOLOv7) models trained on PASCAL VOC, MS COCO, and IIIT5k outperforms existing loss functions. Keywords: object detection, instance segmentation, bounding box regression, loss function § INTRODUCTION Object detection and instance segmentation are two important problems in computer vision, which have attracted broad research interest during the past few years. Most of the state-of-the-art object detectors (e.g., the YOLO series <cit.>, Mask R-CNN <cit.>, Dynamic R-CNN <cit.> and DETR <cit.>) rely on a bounding box regression (BBR) module to determine the position of objects. Based on this paradigm, a well-designed loss function is of great importance for the success of BBR. So far, most of the existing loss functions for BBR fall into two categories: ℓ_n-norm based loss functions and Intersection over Union (IoU)-based loss functions. However, most of the existing loss functions for bounding box regression take the same value under different prediction results, which decreases the convergence speed and accuracy of bounding box regression. Therefore, considering the advantages and drawbacks of the existing loss functions for bounding box regression, and inspired by the geometric features of horizontal rectangles, we design a novel loss function ℒ_MPDIoU based on the minimum point distance for bounding box regression, and use MPDIoU as a new measure to compare the similarity between the predicted bounding box and the groundtruth bounding box in the bounding box regression process. We also provide an easily implemented solution for calculating MPDIoU between two axis-aligned rectangles, allowing it to be used as an evaluation metric, and we incorporate MPDIoU into state-of-the-art object detection and instance segmentation algorithms and test on some of the mainstream object detection, scene text spotting and instance segmentation datasets such as PASCAL VOC <cit.>, MS COCO <cit.>, IIIT5k <cit.> and MTHv2 <cit.> to verify the performance of our proposed MPDIoU. The contributions of this paper can be summarized as below: 1.
We considered the advantages and disadvantages of the existing IoU-based losses and ℓ_n-norm losses, and then proposed an IoU loss based on minimum points distance called ℒ_MPDIoU to tackle the issues of existing losses and obtain a faster convergence speed and more accurate regression results. 2. Extensive experiments have been conducted on object detection, character-level scene text spotting and instance segmentation tasks. Outstanding experimental results validate the superiority of the proposed MPDIoU loss. Detailed ablation studies exhibit the effects of different settings of loss functions and parameter values. § RELATED WORK §.§ Object Detection and Instance Segmentation During the past few years, a large number of object detection and instance segmentation methods based on deep learning have been proposed by researchers from different countries and regions. In summary, bounding box regression has been adopted as a basic component in many representative object detection and instance segmentation frameworks <cit.>. In deep models for object detection, R-CNN series <cit.>, <cit.>, <cit.> adopts two or three bounding box regression modules to obtain higher localization accuracy, while YOLO series <cit.> and SSD series <cit.> adopt one to achieve faster inference. RepPoints <cit.> predicts several points to define a rectangular box. FCOS <cit.> locates an object by predicting the Euclidean distances from the sampling points to the top, bottom, left and right sides of the groundtruth bounding box. As for instance segmentation, PolarMask <cit.> predicts the length of n rays from the sampling point to the edge of the object in n directions to segment an instance. There are other detectors, such as RRPN <cit.> and R2CNN <cit.> adding rotation angle regression to detect arbitrary-orientated objects for remote sensing detection and scene text detection. Mask R-CNN <cit.> adds an extra instance mask branch on Faster R-CNN <cit.>, while the recent state-of-the-art YOLACT <cit.> does the same thing on RetinaNet <cit.>. To sum up, bounding box regression is one key component of state-of-the-art deep models for object detection and instance segmentation. §.§ Scene Text Spotting In order to solve the problem of arbitrary shape scene text detection and recognition, ABCNet <cit.> and its improved version ABCNet v2 <cit.> use the BezierAlign to transform the arbitrary-shape texts into regular ones. These methods achieve great progress by using rectification module to unify detection and recognition into end-to-end trainable systems. <cit.> propose RoI Masking to extract the feature for arbitrarily-shaped text recognition. Similar to <cit.> try to use a faster detector for scene text detection. AE TextSpotter <cit.> uses the results of recognition to guide detection through language model. Inspired by <cit.>, <cit.> proposed a scene text spotting method based on transformer, which provides instance-level text segmentation results. §.§ Loss Function for Bounding Box Regression At the very beginning, ℓ_n-norm loss function was widely used for bounding box regression, which was exactly simple but sensitive to various scales. In YOLO v1 <cit.>, square roots for w and h are adopted to mitigate this effect, while YOLO v3 <cit.> uses 2-wh. In order to better calculate the diverse between the groundtruth and the predicted bounding boxes, IoU loss is used since Unitbox <cit.>. To ensure the training stability, Bounded-IoU loss <cit.> introduces the upper bound of IoU. 
For training deep models in object detection and instance segmentation, IoU-based metrics are suggested to be more consistent than ℓ_n-norm ones <cit.>. The original IoU represents the ratio of the intersection area and the union area of the predicted bounding box and the groundtruth bounding box (as Figure <ref>(a) shows), which can be formulated as IoU=|ℬ_gt⋂ℬ_prd|/|ℬ_gt⋃ℬ_prd|, where ℬ_gt denotes the groundtruth bounding box and ℬ_prd denotes the predicted bounding box. As we can see, the original IoU only accounts for the overlapping area of the two bounding boxes and cannot distinguish the cases in which the two boxes do not overlap. As Eq. <ref> shows, if |ℬ_gt⋂ℬ_prd|=0, then IoU(ℬ_gt,ℬ_prd)=0. In this case, IoU cannot reflect whether two boxes are in the vicinity of each other or very far from each other. GIoU <cit.> was then proposed to tackle this issue. GIoU can be formulated as GIoU=IoU-|𝒞 -ℬ_gt∪ℬ_prd|/|𝒞|, where 𝒞 is the smallest box covering ℬ_gt and ℬ_prd (shown as the black dotted box in Figure <ref>(a)), and |𝒞| is the area of box 𝒞. Due to the introduction of the penalty term in the GIoU loss, the predicted box will move toward the target box in nonoverlapping cases. GIoU loss has been applied to train state-of-the-art object detectors, such as YOLO v3 and Faster R-CNN, and achieves better performance than MSE loss and IoU loss. However, GIoU loses effectiveness when the predicted bounding box is completely contained in the groundtruth bounding box. In order to deal with this problem, DIoU <cit.> was proposed, taking into consideration the distance between the central points of the predicted bounding box and the groundtruth bounding box. DIoU can be formulated as DIoU=IoU-ρ ^2 (ℬ_gt,ℬ_prd)/𝒞 ^2, where ρ ^2 (ℬ_gt,ℬ_prd) denotes the squared Euclidean distance between the central points of the predicted bounding box and the groundtruth bounding box (the red dotted line in Figure <ref>(b)), and 𝒞 ^2 denotes the squared diagonal length of the smallest enclosing rectangle (the black dotted line in Figure <ref>(b)). As we can see, ℒ_DIoU directly minimizes the distance between the central points of the predicted bounding box and the groundtruth bounding box. However, when the central point of the predicted bounding box coincides with the central point of the groundtruth bounding box, it degrades to the original IoU. To address this issue, CIoU was proposed, taking into consideration both the central point distance and the aspect ratio. CIoU can be written as follows: CIoU=IoU-ρ ^2 (ℬ_gt,ℬ_prd)/𝒞 ^2-α V, V =4/π ^2(arctanw^gt/h^gt-arctanw^prd/h^prd)^2, α =V/1-IoU+V. However, the aspect ratio in CIoU is defined as a relative value rather than an absolute one. To address this issue, EIoU <cit.> was proposed based on DIoU, which is defined as follows: EIoU=DIoU-ρ ^2 (w_prd,w_gt)/(w^c) ^2-ρ ^2 (h_prd,h_gt)/(h^c) ^2. However, as Figure <ref> shows, the loss functions mentioned above for bounding box regression lose effectiveness when the predicted bounding box and the groundtruth bounding box have the same aspect ratio but different width and height values, which limits the convergence speed and accuracy. Therefore, we design a novel loss function, ℒ_MPDIoU, for bounding box regression that retains the advantages of ℒ_GIoU <cit.>, ℒ_DIoU <cit.>, ℒ_CIoU <cit.> and ℒ_EIoU <cit.> while offering higher efficiency and accuracy. Indeed, the geometric properties of bounding boxes are not fully exploited in existing loss functions.
Therefore, we propose MPDIoU loss by minimizing the top-left and bottom-right points distance between the predicted bounding box and the groundtruth bounding box for better training deep models of object detection, character-level scene text spotting and instance segmentation. § INTERSECTION OVER UNION WITH MINIMUM POINTS DISTANCE After analyzing the advantages and disadvantages of the IoU-based loss functions mentioned above, we start to think how to improve the accuracy and efficiency of bounding box regression. Generally speaking, we use the coordinates of top-left and bottom-right points to define a unique rectangle. Inspired by the geometric properties of bounding boxes, we designed a novel IoU-based metric named MPDIoU to minimize the top-left and bottom-right points distance between the predicted bounding box and the groundtruth bounding box directly. The calculation of MPDIoU is summarized in Algorithm <ref>. In summary, our proposed MPDIoU simplifies the similarity comparison between two bounding boxes, which can adapt to overlapping or nonoverlapping bounding box regression. Therefore, MPDIoU can be a proper substitute for IoU in all performance measures used in 2D/3D computer vision tasks. In this paper, we only focus on 2D object detection and instance segmentation where we can easily apply MPDIoU as both metric and loss. The extension to non-axis aligned 3D cases is left as future work. §.§ MPDIoU as Loss for Bounding Box Regression In the training phase, each bounding box ℬ_prd =[x^prd,y^prd,w^prd,h^prd]^T predicted by the model is forced to approach its groundtruth box ℬ_gt = [x^gt,y^gt,w^gt,h^gt]^T by minimizing loss function below: ℒ=Θminℬ _gt∈𝔹_gt∑ℒ(ℬ_gt,ℬ_prd|Θ), where 𝔹_gt is the set of groundtruth boxes, and Θ is the parameter of deep model for regression. A typical form of ℒ is ℓ_n-norm, for example, mean-square error (MSE) loss and Smooth-ℓ_1 loss <cit.>, which have been widely adopted in object detection <cit.>; pedestrian detection <cit.>; scene text spotting <cit.>; 3D object detection <cit.>; pose estimation <cit.>; and instance segmentation <cit.>. However, recent researches suggest that ℓ_n-norm-based loss functions are not consistent with the evaluation metric, that is, interaction over union (IoU), and instead propose IoU-based loss functions <cit.>. Based on the definition of MPDIoU in the previous section, we define the loss function based on MPDIoU as follows: ℒ_MPDIoU=1-MPDIoU As a result, all of the factors of existing loss functions for bounding box regression can be determined by four points coordinates. The conversion formulas are shown as follow: |C|=(max(x_2^gt,x_2^prd)-min(x_1^gt,x_1^prd))*(max(y_2^gt,y_2^prd)-min(y_1^gt,y_1^prd)), x_c^gt=x_1^gt+x_2^gt/2, y_c^gt=y_1^gt+y_2^gt/2, y_c^prd=y_1^prd+y_2^prd/2, x_c^prd=x_1^prd+x_2^prd/2, w_gt=x_2^gt-x_1^gt, h_gt=y_2^gt-y_1^gt, w_prd=x_2^prd-x_1^prd, h_prd=y_2^prd-y_1^prd. where |C| represents the minimum enclosing rectangle's area covering ℬ_gt and ℬ_prd, (x_c^gt,y_c^gt) and (x_c^prd, y_c^prd) represent the coordinates of the central points of the groundtruth bounding box and the predicted bounding box, respectively. w_gt and h_gt represent the width and height of the groundtruth bounding box, w_prd and h_prd represent the width and height of the predicted bounding box. 
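As a concrete reading of the metric used throughout this section (and of the algorithm summarized above), a short Python sketch of MPDIoU and ℒ_MPDIoU is given below. It is our own illustration rather than the authors' released code, and the toy boxes in the usage example are arbitrary: both predictions share the groundtruth's centre and aspect ratio, so the existing IoU-based penalties discussed above cannot tell them apart, whereas MPDIoU assigns them different values.

def mpdiou(box_gt, box_prd, img_w, img_h):
    # boxes given as (x1, y1, x2, y2), top-left and bottom-right corners; boxes are
    # assumed to have positive area so that the union cannot vanish
    x1g, y1g, x2g, y2g = box_gt
    x1p, y1p, x2p, y2p = box_prd
    d1_sq = (x1p - x1g) ** 2 + (y1p - y1g) ** 2      # squared top-left corner distance
    d2_sq = (x2p - x2g) ** 2 + (y2p - y2g) ** 2      # squared bottom-right corner distance
    diag_sq = img_w ** 2 + img_h ** 2                # squared diagonal of the input image
    iw = max(0.0, min(x2g, x2p) - max(x1g, x1p))
    ih = max(0.0, min(y2g, y2p) - max(y1g, y1p))
    inter = iw * ih
    union = (x2g - x1g) * (y2g - y1g) + (x2p - x1p) * (y2p - y1p) - inter
    return inter / union - d1_sq / diag_sq - d2_sq / diag_sq

def mpdiou_loss(box_gt, box_prd, img_w, img_h):
    return 1.0 - mpdiou(box_gt, box_prd, img_w, img_h)

gt    = (40, 40, 80, 80)                 # groundtruth box in a 200 x 200 image
outer = (20, 20, 100, 100)               # same aspect ratio, scaled by k = 2
inner = (50, 50, 70, 70)                 # same aspect ratio, scaled by 1/k
print(mpdiou(gt, outer, 200, 200))       # 0.23   (plain IoU is 0.25 for both predictions)
print(mpdiou(gt, inner, 200, 200))       # 0.245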
From Eqs. (<ref>)-(<ref>), we can see that all of the factors considered in the existing loss functions can be determined by the coordinates of the top-left and bottom-right points, namely the nonoverlapping area, the central point distance, and the deviation of width and height, which means that our proposed ℒ_MPDIoU not only takes all of these factors into account but also simplifies the calculation process. According to Theorem <ref>, if the aspect ratio of the predicted bounding boxes and the groundtruth bounding box is the same, the predicted bounding box inside the groundtruth bounding box has a lower ℒ_MPDIoU value than the predicted box outside the groundtruth bounding box. This characteristic ensures the accuracy of bounding box regression, which tends to provide predicted bounding boxes with less redundancy. We define one groundtruth bounding box as ℬ_gt and two predicted bounding boxes as ℬ_prd1 and ℬ_prd2. The width and height of the input image are w and h, respectively. Assume the top-left and bottom-right coordinates of ℬ_gt, ℬ_prd1 and ℬ_prd2 are (x_1^gt,y_1^gt,x_2^gt,y_2^gt), (x_1^prd1,y_1^prd1,x_2^prd1,y_2^prd1) and (x_1^prd2,y_1^prd2,x_2^prd2,y_2^prd2); then the widths and heights of ℬ_gt, ℬ_prd1 and ℬ_prd2 can be formulated as (w_gt=x_2^gt-x_1^gt, h_gt=y_2^gt-y_1^gt), (w_prd1=x_2^prd1-x_1^prd1, h_prd1=y_2^prd1-y_1^prd1) and (w_prd2=x_2^prd2-x_1^prd2, h_prd2=y_2^prd2-y_1^prd2). If w_prd1=k*w_gt and h_prd1=k*h_gt, w_prd2=1/k*w_gt and h_prd2=1/k*h_gt, where k>1 and k∈ N*, and if the central points of ℬ_gt, ℬ_prd1 and ℬ_prd2 all coincide, then GIoU(ℬ_gt, ℬ_prd1)=GIoU(ℬ_gt, ℬ_prd2), DIoU(ℬ_gt, ℬ_prd1)=DIoU(ℬ_gt, ℬ_prd2), CIoU(ℬ_gt, ℬ_prd1)=CIoU(ℬ_gt, ℬ_prd2), EIoU(ℬ_gt, ℬ_prd1)=EIoU(ℬ_gt, ℬ_prd2), but MPDIoU(ℬ_gt, ℬ_prd1)> MPDIoU(ℬ_gt, ℬ_prd2). ∵ IoU(ℬ_gt, ℬ_prd1) = w_gt*h_gt/w_prd1*h_prd1=w_gt*h_gt/k*w_gt*k*h_gt=1/k^2, IoU(ℬ_gt, ℬ_prd2) = w_prd2*h_prd2/w_gt*h_gt=1/k*w_gt*1/k*h_gt/w_gt*h_gt=1/k^2 ∴ IoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd2) ∵ The central points of ℬ_gt, ℬ_prd1 and ℬ_prd2 all coincide. ∴ GIoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd1)=1/k^2, GIoU(ℬ_gt, ℬ_prd2)=IoU(ℬ_gt, ℬ_prd2)=1/k^2, DIoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd1)=1/k^2, DIoU(ℬ_gt, ℬ_prd2)=IoU(ℬ_gt, ℬ_prd2)=1/k^2. ∴ GIoU(ℬ_gt, ℬ_prd1)=GIoU(ℬ_gt, ℬ_prd2), DIoU(ℬ_gt, ℬ_prd1)=DIoU(ℬ_gt, ℬ_prd2). ∵ CIoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd1)-(4/π ^2(arctanw_gt/h_gt-arctanw^prd1/h^prd1)^2)^2/1-IoU(ℬ_gt, ℬ_prd1)+4/π ^2(arctanw_gt/h_gt-arctanw^prd1/h^prd1)^2=1/k^2-(4/π ^2(arctanw_gt/h_gt-arctank*w_gt/k*h_gt)^2)^2/1-IoU(ℬ_gt, ℬ_prd1)+4/π ^2(arctanw_gt/h_gt-arctank*w_gt/k*h_gt)^2=1/k^2. CIoU(ℬ_gt, ℬ_prd2)=IoU(ℬ_gt, ℬ_prd2)-(4/π ^2(arctanw_gt/h_gt-arctanw^prd2/h^prd2)^2)^2/1-1/k^2+4/π ^2(arctanw_gt/h_gt-arctanw^prd2/h^prd2)^2=1/k^2-(4/π ^2(arctanw_gt/h_gt-arctan1/k*w_gt/1/k*h_gt)^2)^2/1-1/k^2+4/π ^2(arctanw_gt/h_gt-arctan1/k*w_gt/1/k*h_gt)^2=1/k^2. ∴ CIoU(ℬ_gt, ℬ_prd1)=CIoU(ℬ_gt, ℬ_prd2). ∵ EIoU(ℬ_gt, ℬ_prd1)=DIoU(ℬ_gt, ℬ_prd1)-(w_prd1-w_gt)^2/w_prd1^2-(h_prd1-h_gt)^2/h_prd1^2=1/k^2-(k*w_gt-w_gt)^2/k^2*w_gt^2-(k*h_gt-h_gt)^2/k^2*h_gt^2=4*k-2*k^2-1/k^2 EIoU(ℬ_gt, ℬ_prd2)=DIoU(ℬ_gt, ℬ_prd2)-(w_gt-w_prd2)^2/w_gt^2-(h_gt-h_prd2)^2/h_gt^2=1/k^2-(w_gt-1/kw_gt)^2/w_gt^2-(h_gt-1/kh_gt)^2/h_gt^2=4*k-2*k^2-1/k^2. ∴ EIoU(ℬ_gt, ℬ_prd1)=EIoU(ℬ_gt, ℬ_prd2).
∵ MPDIoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd1)-(x_1^prd1-x_1^gt)^2+(y_1^prd1-y_1^gt)^2+(x_2^prd1-x_2^gt)^2+(y_2^prd1-y_2^gt)^2/w^2+h^2=1/k^2-2*((1/2*k*w_gt-1/2*w_gt)^2+(1/2*k*h_gt-1/2*h_gt)^2)/w^2+h^2, MPDIoU(ℬ_gt, ℬ_prd2)=IoU(ℬ_gt, ℬ_prd2)-(x_1^prd2-x_1^gt)^2+(y_1^prd2-y_1^gt)^2+(x_2^prd2-x_2^gt)^2+(y_2^prd2-y_2^gt)^2/w^2+h^2=1/k^2-2*((1/2*w_gt-1/2k*w_gt)^2+(1/2*h_gt-1/2k*h_gt)^2)/w^2+h^2, ∴ MPDIoU(ℬ_gt, ℬ_prd1)-MPDIoU(ℬ_gt, ℬ_prd2)=1/4*(k-1)^2*(w_gt^2+h_gt^2)-1/4*(1-1/k)^2*(w_gt^2+h_gt^2)=1/4*(w_gt^2+h_gt^2)*((k-1)^2-(1-1/k)^2) ∵ (k-1)^2>(1-1/k)^2 ∴ MPDIoU(ℬ_gt, ℬ_prd1)> MPDIoU(ℬ_gt, ℬ_prd2). Considering the groundtruth bounding box, ℬ_gt is a rectangle with area bigger than zero, i.e. A^gt > 0. Alg. <ref> (1) and the Conditions in Alg. <ref> (6) respectively ensure the predicted area A^prd and intersection area ℐ are non-negative values, i.e. A^prd≥ 0 and ℐ≥ 0, ∀ℬ_prd∈ℝ^4. Therefore union area 𝒰>0 for any predicted bounding box ℬ_prd=(x_1^prd,y_1^prd,x_2^prd,y_2^prd)∈ℝ^4. This ensures that the denominator in IoU cannot be zero for any predicted value of outputs. In addition, for any values of ℬ_prd=(x_1^prd,y_1^prd,x_2^prd,y_2^prd)∈ℝ^4, the union area is always bigger than the intersection area, i.e. 𝒰≥ℐ. As a result, ℒ_MPDIoU is always bounded, i.e. 0≤ℒ_MPDIoU< 3, ∀ℬ_prd∈ℝ^4. ℒ_MPDIoU behaviour when IoU = 0: For MPDIoU loss, we have ℒ_MPDIoU =1-MPDIoU=1+d_1^2/d^2+d_2^2/d^2-IoU. In the case of ℬ_gt and ℬ_prd do not overlap, which means IoU=0, MPDIoU loss can be simplified to ℒ_MPDIoU =1-MPDIoU=1+d_1^2/d^2+d_2^2/d^2. In this case, by minimizing ℒ_MPDIoU, we actually minimize d_1^2/d^2+d_2^2/d^2. This term is a normalized measure between 0 and 1, i.e. 0≤d_1^2/d^2+d_2^2/d^2< 2. § EXPERIMENTAL RESULTS We evaluate our new bounding box regression loss ℒ_MPDIoU by incorporating it into the most popular 2D object detector and instance segmentation models such as YOLO v7 <cit.> and YOLACT <cit.>. To this end, we replace their default regression losses with ℒ_MPDIoU , i.e. we replace ℓ_1-smooth in YOLACT <cit.> and ℒ_CIoU in YOLO v7 <cit.>. We also compare the baseline losses against ℒ_GIoU. §.§ Experimental Settings The experimental environment can be summarized as follows: the memory is 32GB, the operating system is windows 11, the CPU is Intel i9-12900k, and the graphics card is NVIDIA Geforce RTX 3090 with 24GB memory. In order to conduct a fair comparison, all of the experiments are implemented with PyTorch <cit.>. §.§ Datasets We train all object detection and instance segmentation baselines and report all the results on two standard benchmarks, i.e. the PASCAL VOC <cit.> and the Microsoft Common Objects in Context (MS COCO 2017) <cit.> challenges. The details of their training protocol and their evaluation will be explained in their own sections. PASCAL VOC 2007&2012: The Pascal Visual Object Classes (VOC) <cit.> benchmark is one of the most widely used datasets for classification, object detection and semantic segmentation, which contains about 9963 images. The training dataset and the test dataset are 50% for each, where objects from 20 pre-defined categories are annotated with horizontal bounding boxes. Due to the small scale of images for instance segmentation, which leads to weak performance, we only provide the instance segmentation results training with MS COCO 2017. 
MS COCO: MS COCO <cit.> is a widely used benchmark for image captioning, object detection and instance segmentation, which contains more than 200,000 images across train, validation and test sets with over 500,000 annotated object instances from 80 categories. IIIT5k: IIIT5k <cit.> is one of the popular scene text spotting benchmarks with character-level annotations, which contains 5,000 cropped word images collected from the Internet. The character categories include English letters and digits. There are 2,000 images for training and 3,000 images for testing. MTHv2: MTHv2 <cit.> is one of the popular OCR benchmarks with character-level annotations. The character categories include simplified and traditional Chinese characters. It contains more than 3,000 images of Chinese historical documents and more than 1 million Chinese characters. §.§ Evaluation Protocol In this paper, we used the same performance measures as the MS COCO 2018 Challenge <cit.> to report all of our results, including mean Average Precision (mAP) over different class labels for a specific IoU threshold used to determine true positives and false positives. The main performance measures for object detection used in our experiments are precision and [email protected]:0.95. We also report the mAP value for an IoU threshold of 0.75, shown as AP75 in the tables. As for instance segmentation, the main performance measures used in our experiments are AP and AR, obtained by averaging mAP and mAR across different IoU thresholds, i.e. IoU = { .5, .55,..., .95}. All of the object detection and instance segmentation baselines have also been evaluated using the test sets of MS COCO 2017 and PASCAL VOC 2007&2012. The results will be shown in the following sections. §.§ Experimental Results of Object Detection Training protocol. We used the original Darknet implementation of YOLO v7 released by <cit.>. For baseline results (training using GIoU loss), we selected DarkNet-608 as the backbone in all experiments and followed exactly their training protocol using the reported default parameters and the number of iterations on each benchmark. To train YOLO v7 using GIoU, DIoU, CIoU, EIoU and MPDIoU losses, we simply replaced the bounding box regression IoU loss with the ℒ_GIoU, ℒ_DIoU, ℒ_CIoU, ℒ_EIoU and ℒ_MPDIoU losses explained in <ref>. Following the original code's training protocol, we trained YOLOv7 <cit.> using each loss on both the training and validation sets of the dataset for up to 150 epochs. We set the patience of the early stopping mechanism to 5 to reduce the training time and save the model with the best performance. The performance of the best checkpoint for each loss has been evaluated on the test set of PASCAL VOC 2007&2012. The results are reported in Table <ref>. §.§ Experimental Results of Character-level Scene Text Spotting Training protocol. We used a training protocol similar to that of the object detection experiments. Following the original code's training protocol, we trained YOLOv7 <cit.> using each loss on both the training and validation sets of the dataset for up to 30 epochs. The performance of the best checkpoint for each loss has been evaluated on the test sets of IIIT5K <cit.> and MTHv2 <cit.>. The results are reported in Table <ref> and Table <ref>.
Loss | AP | AP75
ℒ_GIoU | 42.9 | 45
ℒ_DIoU | 42.2 | 42.3
Relative improv. (%) | -1.6 | -6
ℒ_CIoU | 44.1 | 46.6
Relative improv. (%) | 2.7 | 3.5
ℒ_EIoU | 41 | 42.6
Relative improv. (%) | -4.4 | -5.3
ℒ_MPDIoU | 44.5 | 46.6
Relative improv. (%) | 3.7 | 3.5
Table: Comparison between the performance of YOLO v7 <cit.> trained using its own loss (ℒ_CIoU) as well as the ℒ_GIoU, ℒ_DIoU, ℒ_EIoU and ℒ_MPDIoU losses. The results are reported on the test set of IIIT5K.
Loss | AP | AP75
ℒ_GIoU | 52.1 | 55.3
ℒ_DIoU | 53.2 | 55.8
Relative improv. (%) | 2.1 | 0.9
ℒ_CIoU | 52.3 | 53.6
Relative improv. (%) | 0.3 | -3.0
ℒ_EIoU | 53.2 | 54.7
Relative improv. (%) | 2.1 | -1.0
ℒ_MPDIoU | 54.5 | 58
Relative improv. (%) | 4.6 | 4.8
Table: Comparison between the performance of YOLO v7 <cit.> trained using its own loss (ℒ_CIoU) as well as the ℒ_GIoU, ℒ_DIoU, ℒ_EIoU and ℒ_MPDIoU losses. The results are reported on the test set of MTHv2.
As we can see, the results in Tab. <ref> and <ref> show that training YOLO v7 using ℒ_MPDIoU as the regression loss considerably improves its performance compared to the existing regression losses, including ℒ_GIoU, ℒ_DIoU, ℒ_CIoU and ℒ_EIoU. Our proposed ℒ_MPDIoU shows outstanding performance on character-level scene text spotting. §.§ Experimental Results of Instance Segmentation Training protocol. We used the latest PyTorch implementation of YOLACT <cit.>, released by the University of California. For baseline results (trained using ℒ_GIoU), we selected ResNet-50 as the backbone network architecture for YOLACT in all experiments and followed their training protocol using the reported default parameters and the number of iterations on each benchmark. To train YOLACT using GIoU, DIoU, CIoU, EIoU and MPDIoU losses, we replaced its ℓ_1-smooth loss in the final bounding box refinement stage with the ℒ_GIoU, ℒ_DIoU, ℒ_CIoU, ℒ_EIoU and ℒ_MPDIoU losses explained in <ref>. Similar to the YOLO v7 experiment, we replaced the original loss for bounding box regression with our proposed ℒ_MPDIoU. As Figure <ref>(c) shows, incorporating ℒ_GIoU, ℒ_DIoU, ℒ_CIoU and ℒ_EIoU as the regression loss can slightly improve the performance of YOLACT on MS COCO 2017. However, the improvement is more obvious when YOLACT is trained using ℒ_MPDIoU; we visualize mask AP against different IoU thresholds, i.e. 0.5≤ IoU≤ 0.95. Similar to the above experiments, detection accuracy improves by using ℒ_MPDIoU as the regression loss over the existing loss functions. As Table <ref> shows, our proposed ℒ_MPDIoU performs better than existing loss functions on most of the metrics. However, the amount of improvement between different losses is smaller than in previous experiments. This may be due to several factors. First, the detection anchor boxes of YOLACT <cit.> are denser than those of YOLO v7 <cit.>, resulting in less frequent scenarios where ℒ_MPDIoU has an advantage over ℒ_IoU, such as nonoverlapping bounding boxes. Second, the existing loss functions for bounding box regression have already been improved during the past few years, which means the room left for accuracy improvement is limited, although there is still large room for efficiency improvement. We also compared the trends of the bounding box loss and AP value during the training of YOLACT with different regression loss functions. As Figure <ref>(a),(b) shows, training with ℒ_MPDIoU performs better than most of the existing loss functions, e.g. ℒ_GIoU and ℒ_DIoU, achieving higher accuracy and faster convergence.
Although the bounding box loss and AP value show considerable fluctuation, our proposed ℒ_MPDIoU performs better at the end of training. In order to better reveal the performance of the different loss functions for bounding box regression in instance segmentation, we provide some visualization results in Figures <ref> and <ref>. As we can see, ℒ_MPDIoU yields instance segmentation results with less redundancy and higher accuracy than ℒ_GIoU, ℒ_DIoU, ℒ_CIoU and ℒ_EIoU. § CONCLUSION In this paper, we introduced a new metric named MPDIoU, based on minimum point distance, for comparing any two arbitrary bounding boxes. We proved that this new metric has all of the appealing properties of existing IoU-based metrics while simplifying the calculation. It will be a better choice in all performance measures in 2D/3D vision tasks relying on the IoU metric. We also proposed a loss function called ℒ_MPDIoU for bounding box regression. We improved performance on popular object detection, scene text spotting and instance segmentation benchmarks such as PASCAL VOC, MS COCO, MTHv2 and IIIT5K, using both the commonly used performance measures and our proposed MPDIoU, by applying it to state-of-the-art object detection and instance segmentation algorithms. Since the optimal loss for a metric is the metric itself, our MPDIoU loss can be used as the optimal bounding box regression loss in all applications which require 2D bounding box regression. As for future work, we would like to conduct further experiments on downstream tasks based on object detection and instance segmentation, including scene text spotting, person re-identification and so on. With these experiments, we can further verify the generalization ability of our proposed loss functions.
http://arxiv.org/abs/2307.04651v1
20230710154937
Joint Salient Object Detection and Camouflaged Object Detection via Uncertainty-aware Learning
[ "Aixuan Li", "Jing Zhang", "Yunqiu Lv", "Tong Zhang", "Yiran Zhong", "Mingyi He", "Yuchao Dai" ]
cs.CV
[ "cs.CV" ]
Salient objects attract human attention and usually stand out clearly from their surroundings. In contrast, camouflaged objects share similar colors or textures with the environment. In this case, salient objects are typically non-camouflaged, and camouflaged objects are usually not salient. Due to this inherent contradictory attribute, we introduce an uncertainty-aware learning pipeline to extensively explore the contradictory information of salient object detection (SOD) and camouflaged object detection (COD) via data-level and task-wise contradiction modeling. We first exploit the dataset correlation of these two tasks and claim that the easy samples in the COD dataset can serve as hard samples for SOD to improve the robustness of the SOD model. Based on the assumption that these two models should lead to activation maps highlighting different regions of the same input image, we further introduce a contrastive module with a joint-task contrastive learning framework to explicitly model the contradictory attributes of these two tasks. Different from conventional intra-task contrastive learning for unsupervised representation learning, our contrastive module is designed to model the task-wise correlation, leading to cross-task representation learning. To better understand the two tasks from the perspective of uncertainty, we extensively investigate the uncertainty estimation techniques for modeling the main uncertainties of the two tasks, namely task uncertainty (for SOD) and data uncertainty (for COD), and aiming to effectively estimate the challenging regions for each task to achieve difficulty-aware learning. Experimental results on benchmark datasets demonstrate that our solution leads to both state-of-the-art performance and informative uncertainty estimation. Salient Object Detection, Camouflaged Object Detection, Task Uncertainty, Data Uncertainty, Difficulty-aware Learning Joint Salient Object Detection and Camouflaged Object Detection via Uncertainty-aware Learning Aixuan Li,  Jing Zhang*,  Yunqiu Lv,  Tong Zhang,  Yiran Zhong,  Mingyi He,  Yuchao Dai*  A. Li, Y. Lv, M. He and Y. Dai are with School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China and Shaanxi Key Laboratory of Information Acquisition and Processing. J. Zhang is with School of Computing, the Australian National University, Canberra, Australia. T. Zhang is with IVRL, EPFL, Switzerland. Y. Zhong is with Shanghai AI Laboratory, Shanghai, China. A preliminary version of this work appeared at <cit.>. Our code and data are available at: <https://npucvr.github.io/UJSCOD/>. A. Li and J. Zhang contributed equally. Corresponding authors: Y. Dai ([email protected]) and J. Zhang ([email protected]). This research was supported in part by National Natural Science Foundation of China (62271410) and by the Fundamental Research Funds for the Central Universities. 
August 12, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Visual salient object detection (SOD) aims to localize the salient object(s) of the image that attract human attention. The early work of saliency detection mainly relies on human visual priors based handcrafted features <cit.> to detect high contrast regions. Deep SOD models <cit.> use deep saliency features instead of handcrafted features to achieve effective global and local context modeling, leading to better performance. In general, existing SOD models <cit.> focus on two directions: 1) constructing effective saliency decoders <cit.> that facilitate high/low-level feature aggregation; and 2) designing appropriate loss functions <cit.> to achieve structure-preserving saliency detection. Unlike salient objects that immediately attract human attention, camouflaged objects evolve to blend into their surroundings, effectively avoiding detection by predators. The concept of camouflage has a long history <cit.>, and finds application in various domains including biology <cit.>, military <cit.> and other fields <cit.>. From a biological evolution perspective, prey species have developed adaptive mechanisms to camouflage themselves within their environment <cit.>, often by mimicking the structure or texture of their surroundings. These camouflaged objects can only be distinguished by subtle differences. Consequently, camouflaged object detection (COD) models <cit.> are designed to identify and localize these "subtle" differences, enabling the comprehensive detection of camouflaged objects. To address the contradictory nature of SOD and COD, we propose a joint-task learning framework that explores the relationship between these two tasks. Our investigation reveals an inverse relationship between saliency and camouflage, where a higher level of saliency typically indicates a lower level of camouflage, and vice versa. This oppositeness is clearly demonstrated in Fig. <ref>, where the object gradually transits from camouflaged to salient as the contrast level increases. Hence, we explore the correlation of SOD and COD from both data-wise and task-wise perspectives. For data-wise correlation modeling, we re-interpret the data augmentation by defining easy samples from COD as hard samples for SOD. By doing so, we achieve contradiction modeling from the dataset perspective. Fig. <ref> illustrates that typical camouflaged objects are never salient, but samples in the middle can be defined as hard samples for SOD. Thus, we achieve context-aware data augmentation by the proposed data interaction as data augmentation method. 
In addition, for COD, we find that performance is sensitive to the size of the camouflaged objects. To explain this, we crop the foreground camouflaged objects with different percentages of background, and show their corresponding prediction maps and uncertainty maps in Fig. <ref>. We observe that the cropping-based prediction uncertainty (the variance of multiple predictions) is relatively consistent with the region-level detectability of the camouflaged objects, validating that the model's performance can be influenced by the complexity of the background. The foreground-cropping strategy can therefore serve as an effective data augmentation technique and a promising uncertainty generation strategy for COD, and it also simulates real-world scenarios in which camouflaged objects may appear in different environments. We have also investigated the foreground-cropping strategy for SOD and observed relatively stable predictions; thus, foreground cropping is only applied to the COD training dataset. Aside from data augmentation, we integrate contrastive learning into our framework to address task-wise contradiction modeling. Conventional contrastive learning typically constructs its positive/negative pairs based on semantic invariance. However, since both SOD and COD are class-agnostic tasks that rely on contrast-based object identification, we adopt a different approach and select positive/negative pairs based on region contrast. Specifically, given the same input image and its corresponding detected regions for the two tasks, we define region features with similar contrast as positive pairs, while features with different contrast serve as negative pairs. This contrastive module is designed to cater to class-agnostic tasks and effectively captures the contrast differences between the foreground objects in both tasks. Additionally, we observe two types of uncertainty for SOD and COD, respectively, as depicted in Fig. <ref>. For SOD, uncertainty stems from the subjective nature of saliency <cit.> and from the majority-voting mechanism in the labeling procedure, which we define as task uncertainty. On the other hand, in COD, uncertainty arises from the difficulty of accurately annotating camouflaged objects due to their resemblance to the background, which we refer to as data uncertainty. To address these uncertainties, as shown in the fifth column of Fig. <ref>, we extensively investigate uncertainty estimation techniques to achieve two main benefits: (1) a self-explanatory model that is aware of its prediction, with an additional uncertainty map to explain the model's confidence, and (2) difficulty-aware learning, where the estimated uncertainty map serves as an indicator of pixel-wise difficulty, facilitating practical hard negative mining. A preliminary version of our work appeared at <cit.>. Compared with the previous version, we have made the following extensions: 1) We have fully analyzed the relationship between SOD and COD from both the dataset and the task-connection perspectives to further establish their relationship. 2) To further investigate the cross-task correlations from the contrast perspective, we have introduced contrastive learning into our dual-task learning framework. 3) As an adversarial training based framework, we have investigated more training strategies for the discriminator, leading to more stable training. 4) We have conducted additional experiments to fully explain the task connections, the uncertainty estimation techniques, the experimental settings, and the hyper-parameters.
Our main contributions are summarized as: * We propose that salient object detection and camouflaged object detection are tasks with opposing attributes for the first time and introduce the first joint learning framework which utilizes category-agnostic contrastive module to model the contradictory attributes of two tasks. * Based on the transitional nature between saliency and camouflage, we introduce data interaction as data augmentation by defining simple COD samples as hard SOD samples to achieve context-aware data augmentation for SOD. * We analyze the main sources of uncertainty in SOD and COD annotations. In order to achieve reliable model predictions, we propose an uncertainty-aware learning module as an indicator of model prediction confidence. * Considering the inherent differences between COD and SOD tasks, we propose random sampling-based foreground-cropping as the COD data augmentation technique to simulate the real-world scenarios of camouflaged objects, which significantly improves the performance. § RELATED WORK Salient Object Detection. Existing deep saliency detection models <cit.> are mainly designed to achieve structure-preserving saliency predictions. <cit.> introduced an auxiliary edge detection branch to produce a saliency map with precise structure information. Wei  <cit.> presented structure-aware loss function to penalize prediction along object edges. Wu  <cit.> designed a cascade partial decoder to achieve accurate saliency detection with finer detailed information. Feng  <cit.> proposed a boundary-aware mechanism to improve the accuracy of network prediction on the boundary. There also exist salient object detection models that benefit from data of other sources. <cit.> integrated fixation prediction and salient object detection in a unified framework to explore the connections of the two related tasks. Zeng  <cit.> presented to jointly learn a weakly supervised semantic segmentation and fully supervised salient object detection model to benefit from both tasks. Zhang  <cit.> used two refinement structures, combining expanded field of perception and dilated convolution, to increase structural detail without consuming significant computational resources, which are used for salient object detection task on high-resolution images. Liu  <cit.> designed the stereoscopically attentive multi-scale module to ensure the effectiveness of the lightweight salient object detection model, which uses a soft attention mechanism in any channel at any position, ensuring the presence of multiple scales and reducing the number of parameters. Camouflaged Object Detection. The concept of camouflage is usually associated with context <cit.>, and the camouflaged object detection models are designed to discover the camouflaged object(s) hidden in the environment. Cuthill  <cit.> concluded that an effective camouflage includes two mechanisms: background pattern matching, where the color is similar to the environment, and disruptive coloration, which usually involves bright colors along edge, and makes the boundary between camouflaged objects and the background unnoticeable. Bhajantri  <cit.> utilized co-occurrence matrix to detect defective. Pike  <cit.> combined several salient visual features to quantify camouflage, which could simulate the visual mechanism of a predator. 
Le  <cit.> fused a classification network with a segmentation network and used the classification network to determine the likelihood that the image contains camouflaged objects to produce more accurate camouflaged object detection. In the field of deep learning, Fan  <cit.> proposed the first publicly available camouflage deep network with the largest camouflaged object training set. Mei  <cit.> incorporated the predation mechanism of organisms into the camouflaged object detection model and proposed a distraction mining strategy. Zhai  <cit.> introduced a joint learning model for COD and edge detection based on graph networks, where the two modules simultaneously mine complementary information. Lv  <cit.> presented a triple-task learning framework to simultaneously rank, localize and segment the camouflaged objects. Multi-task Learning. The basic assumption of multi-task learning is that there exists shared information among different tasks. In this way, multi-task learning is widely used to extract complementary information about positively related tasks. Kalogeiton  <cit.> jointly detected objects and actions in a video scene. Zhen  <cit.> designed a joint semantic segmentation and boundary detection framework by iteratively fusing feature maps generated for each task with a pyramid context module. In order to solve the problem of insufficient supervision in semantic alignment and object landmark detection, Jeon  <cit.> designed a joint loss function to impose constraints between tasks, and only reliable matched pairs were used to improve the model robustness with weak supervision. Joung  <cit.> solved the problem of object viewpoint changes in 3D object detection and viewpoint estimation with a cylindrical convolutional network, which obtains view-specific features with structural information at each viewpoint for both two tasks. Luo  <cit.> presented a multi-task framework for referring expression comprehension and segmentation. Uncertainty-aware Learning. Difficulty-aware (or uncertainty-aware, confidence-aware) learning aims to explore the contribution of hard samples, leading to hard-negative mining <cit.>, which has been widely used in medical image segmentation <cit.>, semantic segmentation <cit.>, and other fields <cit.>. To achieve difficulty-aware learning, one needs to estimate model confidence. To achieve this, Gal  <cit.> used Monte Carlo dropout (MC-Dropout) as a Bayesian approximation, where model uncertainty can be obtained with dropout neural networks. Deep Ensemble <cit.> is another popular type of uncertainty modeling technique, which usually involves generating an ensemble of predictions to obtain variance of predictions as the uncertainty estimation. With extra latent variable involved, the latent variable models <cit.> can also be used to achieve predictive distribution estimation, leading to uncertainty modeling. Following the uncertainty-aware learning pipeline, Lin  <cit.> introduced focal loss to balance the contribution of simple and hard samples for loss updating. Li  <cit.> presented a deep layer cascade model for semantic segmentation to pay more attention to the difficult parts. Nie  <cit.> adopted adversarial learning to generate confidence levels for predicting segmentation maps, and then used the generated confidence levels to achieve difficulty-aware learning. Xie  <cit.> applied difficulty-aware learning to an active learning task, where the difficult samples are claimed to be more informative. Contrastive learning. 
The initial goal of contrastive learning <cit.> is to achieve effective feature representation via self-supervised learning. The main strategy to achieve this is through constructing positive/negative pairs via data augmentation techniques <cit.>, where the basic principle is that similar concepts should have similar representation, thus stay close to each other in the embedding space. On the contrary, dissimilar concepts should stay apart in the embedding space. Different from augmentation based self-supervised contrastive learning, supervised contrastive learning builds the positive/negative pairs based on the given labels <cit.>. Especially for image segmentation, the widely used loss function is cross-entropy loss. However, it's well known that cross-entropy loss is not robust to labeling noise <cit.> and the produced category margins are not separable enough for better generalizing. Further, it penalizes pixel-wise predictions independently without modeling the cross-pixel relationships. Supervised contrastive learning <cit.> can fix the above issues with robust feature embedding exploration, following the similar training pipeline as self-supervised contrastive learning. § OUR METHOD We propose an uncertainty-aware joint learning framework via contrastive learning (see Fig. <ref>) to learn SOD and COD in a unified framework. Firstly, we explain that these two tasks are both contradictory and closely related (Sec. <ref>), and a joint learning pipeline can benefit each other with effective context modeling. Then, we present a Contrastive Module to explicitly model the contradicting attributes of these two tasks (Sec. <ref>), with a data-interaction technique to achieve context-level data augmentation. Further, considering uncertainty for both tasks, we introduce a difficulty-aware learning network (Sec. <ref>) to produce predictions with corresponding uncertainty maps, representing the model's awareness of the predictions. §.§ Tasks Analysis §.§.§ Tasks Relationship Exploration Model Perspective: At the task level, both SOD and COD are class-agnostic binary segmentation tasks, where a UNet <cit.> structure is usually designed to achieve mapping from input (image) space to output (segmentation) space. Differently, the foreground of SOD usually stands out highly from the context, while camouflaged instances are evolved to conceal in the environment. With the above understanding about both SOD and COD, we observe complementary information between the two tasks. Given the same image, we claim that due to the contradicting attributes of saliency and camouflage, the extracted features for each task should be different from each other, and the localized region of each task should be different as well. Dataset Perspective: At the dataset level, we observe some samples within the COD dataset can also be included in the SOD dataset (see Fig. <ref>), where the camouflaged region is consistent with the salient region. However, due to the similar appearance of foreground and background, these samples are easy for COD but challenging for SOD, making them effective for serving as hard samples for SOD to achieve hard negative mining. On the other side, most of the salient foreground in the SOD dataset has high contrast, and the camouflaged regions of the same image usually differ from the salient regions. In this way, samples in the SOD dataset usually cannot serve as simple samples for COD. 
Considering the dataset relationships of both tasks, we claim that easy samples in the COD dataset can effectively serve as hard samples for SOD to achieve context-level data augmentation. §.§.§ Inherent Uncertainty Subjective Nature of SOD: To reflect the human visual system, the initial saliency annotation of each image is obtained with multiple annotators <cit.>, and then majority voting is performed to generate the final ground truth saliency map that represents the majority salient regions,  the DUTS dataset <cit.>, ECSSD <cit.>, DUT <cit.> dataset are annotated by five annotators and HKU-IS <cit.> is annotated by three annotators. Further, to maintain consistency of the annotated data, some SOD datasets adopt the pre-selection strategy, where the images contain no common salient regions across all the annotators will be removed before the labeling process,  HKU-IS <cit.> dataset first evaluates the consistency of the annotation of the three annotators, and removes the images with greater disagreement. In the end, 4,447 images are obtained from an initial dataset with 7,320 images. We argue that both the majority voting process for final label generation and the pre-selection process for candidate dataset preparation introduce bias to both the dataset and the models trained on it. We explain this as the subjective nature of saliency. Labeling Uncertainty of COD: Camouflaged objects are evolved to have similar texture and color information to their surroundings <cit.>. Due to the similar appearance of camouflaged objects and their habitats, it's more difficult to accurately annotate the camouflaged instance than generic object segmentation, especially along instance boundaries. This poses severe and inevitable labeling noise while generating the camouflaged object detection dataset, which we define as labeling uncertainty of camouflage. §.§ Joint-task Contrastive Learning As a joint learning framework, we have two sets of training dataset for each individual task, namely a SOD dataset D_s={x_i^s,y_i^s}_i=1^N_s for SOD and a COD dataset D_c={x_i^c,y_i^c}_i=1^N_c for COD, where {x_i^s,y_i^s} is the SOD image/ground truth pair and {x_i^c,y_i^c} is the COD image/ground truth pair, and i indexes images, N_s and N_c are the size of training dataset for each task. Motivated by both the task contradiction and data sharing attributes of the two tasks, we introduce a contrastive learning based joint-task learning pipeline for joint salient object detection and camouflaged object detection. Firstly, we model the task contradiction (Section <ref>) with a contrastive module. Secondly, we select easy samples by weighted MAE from the COD training dataset (Section <ref>), serving as hard samples for SOD. §.§.§ Task Correlation Modeling via Contrastive Learning To model the task-wise correlation, we design a Contrastive Module in Fig. <ref> and introduce another set of images from the PASCAL VOC 2007 dataset <cit.> as connection modeling dataset D_p={x_i^p}_i=1^N_p, from which we extract both the camouflaged features and the salient features. With the three datasets (SOD dataset D_s, COD dataset D_c and connection modeling dataset D_p), our contradicting modeling framework uses the Feature Encoder module to extract both the camouflage feature and the saliency feature. The Prediction Decoder is then used to produce the prediction of each task. We further present a Contrastive Module to model the connection of the two tasks with the connection modeling dataset. 
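Before the component-level details below, the overall three-branch computation can be summarised in a short PyTorch-style skeleton. This is an illustrative sketch rather than the authors' code: the encoders E_α_s and E_α_c, the shared decoder G_β and the contrastive head Ctrs_θ are passed in as placeholder modules whose internals are described in the following paragraphs, and all class and attribute names are hypothetical.

```python
import torch.nn as nn

class JointSODCOD(nn.Module):
    """Skeleton of the joint framework: two task-specific encoders, a shared
    prediction decoder and a contrastive projection head (internals elided)."""

    def __init__(self, encoder_s, encoder_c, shared_decoder, contrast_head):
        super().__init__()
        self.encoder_s = encoder_s          # E_{alpha_s}: saliency encoder (e.g. ResNet50)
        self.encoder_c = encoder_c          # E_{alpha_c}: camouflage encoder (e.g. ResNet50)
        self.decoder = shared_decoder       # G_beta: shared prediction decoder
        self.contrast_head = contrast_head  # Ctrs_theta: maps features to a low-dim space

    def forward(self, x_s=None, x_c=None, x_p=None):
        out = {}
        if x_s is not None:                      # SOD branch on the SOD batch
            out["sal_pred"] = self.decoder(self.encoder_s(x_s))
        if x_c is not None:                      # COD branch on the COD batch
            out["cam_pred"] = self.decoder(self.encoder_c(x_c))
        if x_p is not None:                      # connection-modeling images (PASCAL VOC subset)
            f_s, f_c = self.encoder_s(x_p), self.encoder_c(x_p)
            out["p_sal_pred"], out["p_cam_pred"] = self.decoder(f_s), self.decoder(f_c)
            out["p_sal_emb"], out["p_cam_emb"] = self.contrast_head(f_s), self.contrast_head(f_c)
        return out
```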
Feature Encoder: The Feature Encoder takes the RGB image (x^s or x^c) as input to produce task-specific predictions and also serves as the feature extractor for the Contrastive Module. We design both the saliency encoder E_α_s and camouflage encoder E_α_c with the same backbone network,  the ResNet50 <cit.>, where α_s and α_c are the corresponding network parameter sets. The ResNet50 backbone network has four groups[We define feature maps of the same spatial size as same group.] of convolutional layers of channel size 256, 512, 1024 and 2048 respectively. We then define the output features of both encoders as F_α_s={f^s_k}_k=1^4 and F_α_c={f^c_k}_k=1^4, where k indexes the feature group. Prediction Decoder: As shown in Fig. <ref>, we design a shared decoder structure for our joint learning framework. To reduce the computational burden, also to achieve feature with larger receptive field, we first attach a multi-scale dilated convolution <cit.> of output channel size C=32 to each backbone feature to generate the new backbone features F'_α_s={f^cs_k}_k=1^4 and F'_α_c={f^cc_k}_k=1^4 for each specific task from F_α_s and F_α_c. Then, we adopt the residual attention based feature fusion strategy from <cit.> to achieve high/low level feature aggregation. Specifically, the lower-level features are fed to a residual connection module <cit.> with two 3× 3 convolutional layers, which is then added to the higher level feature. The sum of the high/low level feature is then fed to another residual connection block of the same structure as above to generate the fused feature. We perform the above feature fusion operation until we reach the lowest level feature,  f^cc_1 or f^cs_1. To generate the prediction for each task, we design a classifier module, which is composed of three cascaded convolutional layers, where the kernel size of the first two convolutional layers is 3× 3, and that of the last convolutional layer is 1× 1. After generating initial predictions, we used the holistic attention module <cit.> for feature optimization to obtain further improved predictions, as the final predictions. To simplify the explanation, we only use prediction after the holistic attention module as the decoder output. We then define prediction of each task as: G_β(F_α_s) for SOD and G_β(F_α_c) for COD, where β represents the parameter set of the shared prediction decoder. Contrastive Module: The Contrastive Module 𝐶𝑡𝑟𝑠_θ aims to enhance the identity of each task with the feature of other tasks as guidance. Specifically, it takes image x^p from the connection modeling dataset D_p={x_i^p}_i=1^N_p as input to model the feature correlation of SOD and COD, where θ is parameter set of the contrastive module. For image x^p from the connection modeling dataset, its saliency and camouflage features are F^p_α_s={f^p_sk}_k=1^4 and F^p_α_c={f^p_ck}_k=1^4, respectively. With the shared decoder G_β, the prediction map are G_β(F^p_α_s) indicating the saliency map and G_β(F^p_α_c) as the camouflage map. The contrastive module decides positive/negative pairs based on contrast information, where regions of similar contrast are defined as positive pairs and the different contrast regions are defined as negative pairs. The intuition behind this is that COD and SOD are both contrast based class-agnostic binary segmentation tasks, making conventional category-aware contrastive learning infeasible to work in this scenario. 
Considering the goal of building the positive/negative pairs for contrastive learning is to learn representative features via exploring the inherent data correlation,  the category information, we argue the inherent correlation in our scenario is the contrast information. For SOD, the foreground shows higher contrast compared with the background, indicating the different contrast level. For COD, the contrast levels of foreground and background are similar. Thus given the same input image x^p, we decide positive/negative pairs based on the contrast information of the activated regions. In Fig. <ref>, we show the activation region (the processed predictions) of the same image from both the saliency encoder (first row) and camouflage encoder (second row). Specifically, given same image x^p, we compute its camouflage map and saliency map, and highlight the detected foreground region in red. Fig. <ref> shows that the two encoders focus on different regions of the image, where the saliency encoder pays more attention to the region that stands out from the context. The camouflage encoder focuses more on the hidden object with similar color or structure as the background, which is consistent with our assumption that these two tasks are contradicting with each other in general. Feature definition: Following the conventional practice of contrastive learning, our contrastive module Ctrs_θ maps image features,  F^p_α_s and F^p_α_c for the connection modeling data x^p, to the lower dimensional feature space via four spectral normed convolutional layers (SNconv) <cit.>, which is proven effective in preserving the geometric distance in the compressed space. We then compute saliency and camouflage features of the same image: F^p_sf =S(G_β(F^p_α_s),Ctrs_θ(F^p_α_s)), F^p_sb =S((1-G_β(F^p_α_s)),Ctrs_θ(F^p_α_s)), F^p_𝑐𝑓 =S(G_β(F^p_α_c),Ctrs_θ(F^p_α_c)), F^p_cb =S((1-G_β(F^p_α_c)),Ctrs_θ(F^p_α_c)), where S(·,·) computes the region feature via matrix multiplication <cit.>, where the feature maps,  Ctrs_θ(F^p_α_s), are scaled to be the same spatial size as the activation map,  G_β(F^p_α_s). F^p_sf∈ℝ^1× C and F^p_sb∈ℝ^1× C in Eq. (<ref>) represent the SOD foreground and background features, and F^p_𝑐𝑓 and F^p_cb are the COD foreground and background features, respectively. Positive/negative pair construction: According to our previous discussion, we define three sets of positive pairs based on contrast similarity: (1) The SOD background feature and COD background feature of the same image should be highly similar, indicating similar contrast information; (2) Due to the nature of the camouflaged object, the foreground and the background features of COD are similar as well as camouflaged object shares similar contrast with the background; (3) Similarly, the COD foreground feature and SOD background feature are also similar in contrast. On the other hand, the negative pair consists of SOD foreground feature and background feature. Contrastive loss: Given the positive/negative pairs, we follow <cit.> and define the contrastive loss as: ℒ_ctrs=-log∑_pos/∑_pos+exp(c(F^p_sf,F^p_sb)), where c(· ) measures the cosine similarity of the normalized vectors. ∑_pos represents the similarity of positive pairs, which is defined as: ∑_pos = exp(c(F^p_cf,F^p_cb))+exp(c(F^p_sb,F^p_cb))+exp(c(F^p_sb,F^p_cf)). §.§.§ Data Interaction In Section <ref>, we discuss the contradicting modeling strategy to model the two tasks from the model correlation perspective. 
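A compact sketch of how the region features and the contrastive loss ℒ_ctrs in the equations above could be computed for a single connection-modeling image is shown below. It assumes S(·,·) is realised as masked average pooling of the low-dimensional embedding with the (soft) prediction map, which is one natural reading of the matrix-multiplication formulation; shapes, batching, and all function names are illustrative.

```python
import torch
import torch.nn.functional as F

def region_feature(mask, feat):
    """S(.,.): masked average pooling of a (C, H, W) embedding with a (1, h, w)
    soft mask, giving a (1, C) region descriptor."""
    mask = F.interpolate(mask.unsqueeze(0), size=feat.shape[-2:],
                         mode="bilinear", align_corners=False).squeeze(0)  # (1, H, W)
    pooled = (feat * mask).flatten(1).sum(dim=1)                           # (C,)
    return (pooled / (mask.sum() + 1e-7)).unsqueeze(0)                     # (1, C)

def contrastive_loss(sal_pred, cam_pred, sal_emb, cam_emb):
    """L_ctrs with contrast-based pairs: sal_pred/cam_pred are (1, h, w) probability
    maps, sal_emb/cam_emb are (C, H, W) embeddings from the contrastive head."""
    f_sf = region_feature(sal_pred, sal_emb)        # SOD foreground
    f_sb = region_feature(1 - sal_pred, sal_emb)    # SOD background
    f_cf = region_feature(cam_pred, cam_emb)        # COD foreground
    f_cb = region_feature(1 - cam_pred, cam_emb)    # COD background

    cos = lambda a, b: F.cosine_similarity(a, b, dim=1)
    # Positive pairs: similar contrast (COD fg/bg, SOD bg vs. COD bg, SOD bg vs. COD fg)
    pos = (torch.exp(cos(f_cf, f_cb)) + torch.exp(cos(f_sb, f_cb))
           + torch.exp(cos(f_sb, f_cf)))
    # Negative pair: SOD foreground vs. SOD background (different contrast)
    neg = torch.exp(cos(f_sf, f_sb))
    return -torch.log(pos / (pos + neg)).mean()
```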
In this section, we further explore the task relationships from dataset perspective, and introduce data interaction as data augmentation. Sample selection principle: As shown in Fig. <ref>, saliency and camouflage are two properties that can transfer from each other. We find that there exist samples in the COD dataset that are both salient and camouflaged. We argue that those samples can be treated as hard samples for SOD to achieve robust learning. The main requirement is that the activation of those samples for SOD and COD should be similar. In other words, the predictions of the selected images for both tasks need to be similar. To select those samples from the COD dataset, we resort to weighted Mean Absolute Error (𝑤𝑀𝐴𝐸), and select samples in the COD dataset <cit.> which achieve the smallest 𝑤𝑀𝐴𝐸 by testing it using a trained SOD model. The weighted mean absolute error 𝑤𝑀𝐴𝐸 is defined as : 𝑤𝑀𝐴𝐸 = ∑_u=1^W∑_v=1^H |y^u, v - p^u,v |/∑_u=1^W∑_v=1^H y^u, v, where u,v is the pixel index, p represents the model prediction, y is the corresponding ground-truth, and W and H indicate size of y. Compared with mean absolute error, 𝑤𝑀𝐴𝐸 avoids the biased selection caused by different sizes of the foreground object(s). Data interaction: For the COD training dataset D_c ={x_i^c, y_i^c}_i=1^N_c and the trained SOD model M_θ_s, we obtain saliency prediction of the images in D_c as P^c_s=M_θ_s({x^c})={p^c_i}_i=1^N_c, where p_i^c is the saliency prediction of the COD training dataset. We assume that easy samples for COD can be treated as hard samples for SOD as shown in Fig. <ref>. Then we select M=403 samples D_c^M with the smallest 𝑤𝑀𝐴𝐸 in D_c via Eq. (<ref>), and add in our SOD training dataset <cit.> as a data augmentation technique. We show the selected samples in Fig. <ref>, which clearly illustrates the partially positive connection of the two tasks at the dataset level. §.§.§ Foreground Cropping as Data Augmentation: Considering the real-life scenarios, camouflaged objects can appear in different sizes, we introduce foreground cropping to achieve context-aware data augmentation. Note that we only perform foreground cropping for COD as the prediction of SOD is relatively stable with different sizes of the foreground object(s). Specifically, we first define the largest bounding box region that covers all the camouflaged objects as the compact cropping (CCrop). Then, we obtain the median cropping (MCrop) and loose cropping (LCrop) by randomly extending 0-80 and 0-150 pixels respectively randomly outward along the compact bounding box. We perform cropping on the raw images and resize the cropped image back to the pre-defined training image size for training. §.§ Uncertainty-aware Learning In Section <ref>, we discussed that both SOD and COD have inherent uncertainty, where the subjective nature of SOD poses serious model uncertainty <cit.> for SOD and difficulty of labeling introduces data uncertainty <cit.> for COD. As shown in Fig. <ref>, for the SOD dataset, the uncertainty comes from the ambiguity of saliency. For the COD dataset, the uncertainty mainly comes from the difficulty of labeling (the accuracy of y_i). To model the uncertainty of both tasks for reliable model generation, we introduce an uncertainty-aware adversarial training strategy to model the task-specific uncertainty in our joint learning framework. 
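The data interaction and foreground-cropping steps described above are simple enough to sketch directly. The NumPy sketch below assumes predictions and ground-truth masks are arrays in [0, 1], that M = 403 samples are kept, and that the cropped image/mask pair is afterwards resized back to the training resolution (resizing omitted); function names and the margin sampling details are illustrative.

```python
import numpy as np

def weighted_mae(pred, gt):
    """wMAE = sum(|y - p|) / sum(y); pred and gt have identical shape, values in [0, 1]."""
    return np.abs(gt - pred).sum() / (gt.sum() + 1e-7)

def select_easy_cod_samples(sod_preds_on_cod, cod_gts, m=403):
    """Indices of the m COD samples with the smallest wMAE under a trained SOD model;
    these are added to the SOD training set as hard samples (data interaction)."""
    scores = [weighted_mae(p, y) for p, y in zip(sod_preds_on_cod, cod_gts)]
    return np.argsort(scores)[:m]

def foreground_crop(image, gt, max_extend=150, rng=np.random):
    """Random foreground cropping for COD: take the tight box around all camouflaged
    pixels and enlarge it by a random margin. max_extend of 0 / 80 / 150 roughly
    corresponds to the compact / median / loose crops described above."""
    ys, xs = np.where(gt > 0.5)
    if len(ys) == 0:                       # no labelled foreground: keep the full image
        return image, gt
    h, w = gt.shape
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    e = rng.randint(0, max_extend + 1)
    top, left = max(0, top - e), max(0, left - e)
    bottom, right = min(h - 1, bottom + e), min(w - 1, right + e)
    return image[top:bottom + 1, left:right + 1], gt[top:bottom + 1, left:right + 1]
```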
Adversarial learning framework: Following the conventional practice of generative adversarial network (GAN) <cit.>, we design a fully convolutional discriminator network to evaluate confidence of the predictions. The fully convolutional discriminator network D_γ consists of five SNconv layers <cit.> of kernel size 3× 3. As a conditional generation task, the fully convolutional discriminator takes the prediction/ground truth and the conditional variable,  the RGB image, as input, and produces a one-channel confidence map, where γ is the network parameter set. Note that we have batch normalization and leaky relu layers after the first four convolutional layers. D_γ aims to distinguish areas of uncertainty, which produce all-zero output with ground truth y as input, and produce |p-y| output with prediction map p as input. In our case, the fully convolutional discriminator aims to discover the hard (or uncertain) regions of the input image. We use the same structure of discriminators with parameter sets γ_s and γ_c for SOD and COD respectively, to identify the two types of challenging regions,  the subjective area for SOD, and the ambiguous regions for COD. Uncertainty-aware learning: For the prediction decoder module, we first have the task-specific loss function to learn each task. Specifically, we adopt the structure-aware loss function <cit.> for both SOD and COD, and define the loss function as: ℒ_str(p,y)=ω*ℒ_ce(p,y)+ℒ_iou^ω(p,y), where ω is the edge-aware weight, which is defined as ω=1+5* | (avg_pool(y)-y) |, y is task-specific ground truth, ℒ_ce is the binary cross-entropy loss, ℒ_iou^ω is the weighted boundary-IOU loss <cit.>. In this way, the task specific loss functions ℒ_str^s and ℒ_str^c for SOD and COD are defined as: ℒ_str^s=ℒ_str(G_β(F_α_s),y^s), ℒ_str^c=ℒ_str(G_β(F_α_c),y^c), To achieve adversarial learning, following <cit.>, we further introduce adversarial loss function to both SOD and COD predictors, which is defined as a consistency loss between discriminators prediction of prediction map and discriminators prediction of ground-truth, aiming to fool the discriminators that the prediction of SOD or COD is the actual ground truth. The adversarial loss functions (ℒ_adv^s and ℒ_adv^c) for SOD and COD, respectively, are defined as: ℒ_adv^s = ℒ_ce(D_γ_s(x^s,G_β(F_α_s)), D_γ_s(x^s,y^s)), ℒ_adv^c =ℒ_ce(D_γ_c(x^c,G_β(F_α_c)), D_γ_c(x^c,y^c)), Both the task specific loss in Eq. (<ref>), Eq. (<ref>) and the adversarial loss in Eq. (<ref>), Eq. (<ref>) are used to update the task-specific network (the generator). To update the discriminator, following the conventional GAN, we want it to distinguish areas of uncertainty clearly. Due to the inherent uncertainty that cannot be directly described, the uncertainty in inputting the ground truth cannot be accurately represented. However, because the correctly annotated regions are dominant in the complete dataset, we believe that the network can perceive the areas that are difficult to learn. The adversarial learning mechanism makes it difficult for the discriminator to distinguish between predicted and ground truth maps, and it can differentiate between noisy ground truth images and areas where RGB images cannot be aligned. Therefore, the output of the discriminator when inputting ground truth is defined as an all-zero map. Additionally, it produces a residual output for the prediction map. The outputs corresponding to different inputs of the discriminator are shown in Fig. <ref>. 
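To make the generator-side losses above concrete, a possible PyTorch sketch of the structure-aware loss ℒ_str and the adversarial term ℒ_adv is given below. The 31×31 average-pooling window for the edge-aware weight ω follows common public implementations of this loss rather than a value stated in the text, and using the discriminator's (detached) response on the ground truth as a soft target is only one plausible realisation of the consistency loss; predictions are assumed to be logits of shape (N, 1, H, W).

```python
import torch
import torch.nn.functional as F

def structure_loss(pred, mask):
    """L_str: edge-aware weighted BCE + weighted IoU.
    pred: (N, 1, H, W) logits; mask: (N, 1, H, W) binary ground truth."""
    # omega = 1 + 5 * |avg_pool(y) - y| puts extra weight on pixels near object boundaries
    weit = 1 + 5 * torch.abs(F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15) - mask)
    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction="none")
    wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))

    prob = torch.sigmoid(pred)
    inter = (prob * mask * weit).sum(dim=(2, 3))
    union = ((prob + mask) * weit).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)
    return (wbce + wiou).mean()

def generator_adv_loss(disc, image, pred_logits, mask):
    """L_adv: pull the discriminator's response on the prediction towards its response
    on the ground truth, so the generator 'fools' the (conditional) discriminator."""
    d_pred = disc(torch.cat([image, torch.sigmoid(pred_logits)], dim=1))
    d_gt = disc(torch.cat([image, mask], dim=1)).detach()
    return F.binary_cross_entropy_with_logits(d_pred, torch.sigmoid(d_gt))
```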
Then, the discriminators (D_γ_s and D_γ_c) are updated via: ℒ_dis^s=ℒ_ce(D_γ_s(x^s,G_β(F_α_s)), |G_β(F_α_s)-y^s|), + ℒ_ce(D_γ_s(x^s,y^s),0), ℒ_dis^c=ℒ_ce(D_γ_c(x^c,G_β(F_α_c)), |G_β(F_α_c)-y^c|), + ℒ_ce(D_γ_c(x^c,y^c),0), Note that the two discriminators are updated separately. §.§ Objective Function As a joint confidence-aware adversarial learning framework, we further introduce the objective functions in detail for better understanding of our learning pipeline. Firstly, given a batch of images from the SOD training dataset x^s, we define the confidence-aware loss with contrastive modeling for the generator as: ℒ^s = ℒ_str^s +λ_adv*ℒ_adv^s+λ_ctrs*ℒ_ctrs, where ℒ_str^s is the task specific loss, defined in Eq. (<ref>), ℒ_avd^s is the adversarial loss in Eq. (<ref>), and ℒ_ctrs is the contrative loss in Eq. (<ref>). The parameters λ_adv=1,λ_ctrs=0.1 are used to balance the contribution of adversarial loss/contrastive loss for robust training. Similarly, for image batch x^c from the COD training dataset, the confidence-aware loss with contrastive modeling for the generator is defined as: ℒ^c = ℒ_str^c + λ_adv*ℒ_adv^c+λ_ctrs*ℒ_ctrs. The discriminators are optimized separately, where D_γ_s and D_γ_c are updated via Eq. (<ref>) and Eq. (<ref>). Note that, we only introduce contrastive learning to our joint-task learning framework after every 5 steps, which is proven more effective in practice. We show the training pipeline of our framework in Algorithm <ref> for better understanding of the implementation details. § EXPERIMENTAL RESULTS §.§ Setting: Dataset: For salient object detection, we train our model using the augmented DUTS training dataset <cit.> via data interaction (see Sec. <ref>), and testing on six other testing dataset, including the DUTS testing datasets, ECSSD <cit.>, DUT <cit.>, HKU-IS <cit.>, PASCAL-S dataset <cit.> and SOD dataset <cit.>. For camouflaged object detection, we train our model using the benchmark COD training dataset, which is a combination of COD10K training set <cit.> and CAMO training dataset <cit.>, and test on four camouflaged object detection testing sets, including the CAMO testing dataset <cit.>, CHAMELEON <cit.>, COD10K testing dataset <cit.> and NC4K dataset <cit.>. Evaluation Metrics: We use four evaluation metrics to evaluate the performance of the salient object detection models and the camouflaged object detection models, including Mean Absolute Error (ℳ), Mean F-measure (F_β), Mean E-measure <cit.> (E_ξ) and S-measure <cit.> (S_α). Mean Absolute Error (ℳ): measures the pixel-level pairwise errors between the prediction s and the ground-truth map y, which is defined as: ℳ = ∑_u=1^W∑_v=1^H |y^u, v - s^u,v |/W × H, where W and H indicate size of the ground-truth map. Mean F-measure (F_β): measures the precision and robustness of the model, which is defined as: F_β = TP/TP + 1/2(FP + FN), where TP denotes the number of true positives, FP shows the false positives and FN indicates the false negatives. Mean E-measure (E_ξ): measures the pixel-level matching and image-level statistics of the prediction <cit.>, which is defined as: E_ξ = 1/W × H∑_u=1^W∑_v=1^H ϕ_p(u, v), where ϕ_p(u, v) is the alignment matrix <cit.>, measuring the alignment of model prediction and the ground truth. S-measure (S_α): measures the regional and global structural similarities between the prediction and the ground-truth <cit.> as: S_α = α· S_o + (1 - α) · S_r. 
where S_o measures the global structural similarity, in terms of the consistencies in the foreground and background predictions and contrast between the foreground and background predictions, S_r measures the regional structure similarity, and α = 0.5 balances the two similarity measures following <cit.>. Training details: We train our model in Pytorch with ResNet50 <cit.> as backbone, as shown in Fig. <ref>. Both the encoders for saliency and camouflage branches are initialized with ResNet50 <cit.> trained on ImageNet, and other newly added layers are initialized by default. We resize all the images and ground truth to 352×352, and perform multi-scale training. The maximum step is 30000. The initial learning rate are 2e-5, 2e-5 and 1.2e-5 with Adam optimizer for the generator, discriminators and contrastive module respectively. The whole training takes 26 hours with batch size 22 on an NVIDIA GeForce RTX 3090 GPU. §.§ Performance Comparison Quantitative Analysis: We compare the performance of our SOD branch with SOTA SOD models as shown in Table <ref>. One observation from Table <ref> is that the structure-preserving strategy is widely used in the state-of-the-art saliency detection models, SCRN <cit.>, F^3Net <cit.>, ITSD <cit.>, and it can indeed improve model performance. Our method shows significant improvement in performance on four evaluation metrics compared to other SOD methods, except for the SOD dataset <cit.>. Due to the small size of the SOD dataset <cit.>(300 images), we believe that fluctuations in predictions are reasonable. We also compare the performance of our COD branch with SOTA COD models in Table <ref>. Except for COD10k<cit.>, where our method is slightly inferior to ZoomNet <cit.>, our method shows significant superiority over all other COD methods on all datasets. The reason for this may be that ZoomNet <cit.> was tested at resolution 384 × 384, while our method was tested at resolution 352 × 352, and resolution can affect the performance of COD. The consistent best performance of our camouflage model further illustrates the effectiveness of the joint learning framework. Qualitative Analysis: Further, we show predictions of ours and SOTA models of SOD method in Fig. <ref>, and COD method in Fig. <ref>, where the Uncertainty is obtained based on the prediction from the discriminator. Fig. <ref> shows that we produce both accurate prediction and reasonable uncertainty estimation, where the brighter areas of the uncertainty map indicate the less confident regions. It can be observed that our approach can better distinguish the boundaries between salient objects and the background. Fig. <ref> illustrates that our proposed joint learning approach and random-sampling based foreground cropping can better localize camouflaged targets. Further, the produced uncertainty map clearly represents model awareness of the prediction, leading to interpretable prediction for the downstream tasks. Run-time Analysis: For COD task, the inference time of our model is 53.9 ms per image. And for SOD task, the inference time of our model is 40.4 ms per image on an NVIDIA GeForce RTX 3090 GPU, which is comparable to the state-of-the-art model in terms of speed. §.§ Ablation Study We extensively analyze the proposed joint learning framework to explain the effectiveness of our strategies, and show the performance of our SOD and COD models in Table <ref> and Table <ref> respectively. Note that, unless otherwise stated, we do not perform multi-scale training for the related models. 
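Before turning to the ablations, and purely for reference, the two simplest of the evaluation metrics defined above (ℳ and the mean F-measure) can be sketched as follows. The E-measure and S-measure additionally require the alignment matrix and the region/object similarity terms of the cited papers and are omitted here; the binarisation threshold is illustrative.

```python
import numpy as np

def mae(pred, gt):
    """M: mean absolute error between a prediction and its ground truth, both in [0, 1]."""
    return np.abs(gt - pred).mean()

def mean_f_measure(pred, gt, thresh=0.5):
    """F_beta as written above: TP / (TP + 0.5 * (FP + FN)), after binarising the prediction."""
    p = pred >= thresh
    g = gt >= 0.5
    tp = np.logical_and(p, g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()
    return tp / (tp + 0.5 * (fp + fn) + 1e-7)
```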
Train each individual task: We use the same Feature encoder, Prediction decoder in Fig. <ref> to train the SOD model with original DUTS dataset and the COD model trained without random-sampling based foreground cropping following the same training related setting as in the Training details section, and show their performance as SSOD and SCOD, respectively. And we used the augmented DUTS dataset and foreground cropping COD training dataset to train the SOD model and the COD model separately, the results are shown as ASOD and ACOD. The comparable performance of SSOD and SCOD with their corresponding SOTA models proves the effectiveness of our prediction decoder. Further, the two data augmentation based models show clear performance improvement compared with training directly with the raw dataset, especially for the COD task, where foreground cropping is applied. We generated the augmented SOD dataset via data interaction (see Sec. <ref> and Fig. <ref>). Experimental results show a reasonable performance improvement, indicating that our proposed data augmentation techniques are effective in enriching the diversity of the training data. Joint training of SOD and COD: We train the Feature encoder and Prediction decoder within a joint learning pipeline to achieve simultaneous SOD and COD. The performance is reported as JSOD1 and JCOD1, respectively. For the COD task, there was a slight improvement in performance compared to the uni-task setting, indicating that under the joint learning framework, SOD can provide effective prediction optimization for COD. For SOD task, there was a slight decrease in performance, which we believe is due to the lack of consideration of the contradicting attribute between the two tasks. The subsequent experiments in the paper fully demonstrate this point. Joint training of SOD and COD with contrastive learning: We add the task connection constraint to the joint learning framework,  the contrastive module in particular, and show performance as JSOD2 and JCOD2 respectively. As discussed in Sec. <ref>, our contrastive module is designed to enhance the context information, and the final results show performance improvement for SOD. However, we observe deteriorated performance for COD when the contrastive module is applied. We have analyzed the predictions and find that the context enhancement strategy via contrastive learning can be a double-edged sword, which is effective for SOD but leads to performance deterioration for COD. Different from the conventional way of constructing positive/negative pairs based on augmentation or category information, SOD and COD are both class-agnostic tasks, and our positive/negative pairs are designed based on contrast information. Experimental results explain its effectiveness for high-contrast based foreground detection,  salient object detection, while minimal context difference between foreground and background of COD poses new challenges for applying contrastive learning effectively to achieve distinguishable foreground/background feature representation. Joint adversarial training of SOD and COD: Based on the joint learning framework (JSOD1 and JCOD1), we further introduce the adversarial learning pipeline, and show performance as JSOD3 and JCOD3. We observe relatively comparable performance of JSOD3 (JCOD3) to JSOD1 (JCOD1), explaining that the adversarial training pipeline will not sacrifice model deterministic performance. 
Note that with adversarial training, our model can output prediction uncertainty with single forward, serving as an auxiliary output to explain confidence of model output (see Uncertainty in Fig. <ref> and Fig. <ref>). The proposed joint framework: We report our final model performance with both the contrastive module and the adversarial learning solution as Ours. As a dual-task learning framework, Ours shows improved performance compared with models with each individual strategy,  contrastive learning and adversarial training. As discussed in Sec. <ref>, the former is introduced to model the task-wise correlation, and the latter is presented to model the inherent uncertainty within the two tasks. Although these two strategies show limitations for some specific datasets, we argue that as a class-agnostic task, both our contrast based positive/negative pair construction for contrastive learning and residual learning based discriminator learning within the adversarial training pipeline are effective in general, and more investigation will be conducted to further explore their contributions for the joint learning of the contradictory tasks. §.§ Framework Analysis As discussed in Sec. <ref>, SOD and COD are correlated from both task's point of view and the data's perspective. In this Section, we further analyze their relationships and the inherent uncertainty modeling techniques for SOD and COD. §.§.§ Data interaction analysis SOD and COD are both context based tasks (see Fig. <ref>), and can be transformed into each other, where the former represents the attribute of object(s) with high-contrast and the latter is related to concealment. Considering the opposite object attribute of saliency and camouflage, we introduce a simple data selection strategy as data augmentation for saliency detection. Based on the nature of the two task, we explicitly connected the SOD and COD datasets. Experimental results show that incorporating an additional 3.8% of data, specifically 403 out of 10,553 images, led to performance improvement for SOD, comparing ASOD and SSOD in Tabel <ref>. §.§.§ Task interaction analysis In our preliminary version <cit.>, we used the entire PASCAL VOC 2007 as a bridge dataset to model the contradictory properties of SOD and COD via similarity modeling. Here, we apply contrative learning based on contrast information instead, which is proven effective for SOD, comparing JSOD2 and JSOD1 in Tabel <ref>. As contrastive learning is sensitive to the positive/negative pools, and PASCAL VOC 2007 dataset contains samples that pose challenges for either SOD or COD to decide the foreground, we thus selected a portion of the images from the bridge dataset as the updated PASCAL dataset. Specifically, we tested the PASCAL VOC 2007 dataset using the trained SOD and COD models to obtain the weighted MAE of the SOD and COD prediction maps. Then, we selected 200 images from the PASCAL VOC 2007 dataset with the smallest weighted MAE as the new bridge dataset for training the contradicting modeling module. The contradicting module is trained every 5 steps of the other modules to avoid involving feature conflicting for COD. Although our contrastive learning solution is proven effective for SOD, the final performance still shows deteriorated performance of COD, comparing JCOD2 and JCOD1 in Tabel <ref>. The main reason is that the contrastive learning module tries to push the feature spaces of foreground and background to be close as Eq. 
(<ref>), while the main task of COD is to distinguish the foreground from the background. These contradicting objectives make it harder for the COD task to converge. §.§.§ Discriminator analysis Considering that the uncertainty regions of both tasks are associated with the image, we concatenate the prediction/ground truth with the image and feed it to the discriminator. Following <cit.>, we define the portions of a network's incorrect predictions as areas that are difficult to learn. In the early stages of training, the network fits the correctly annotated regions, and in later training, the predicted maps gradually approach the ground truth maps with the uncertainty/noise annotations <cit.>. When image information is introduced, the areas that are difficult to predict or annotated incorrectly (inherent uncertainty) can be gradually discovered under the guidance of the RGB image. §.§ Hyper-parameters analysis In our joint learning framework, several hyper-parameters affect the final performance, including the maximum number of iterations, the base learning rates, and the weights of the contrastive learning loss and the adversarial loss. Although the SOD training dataset is three times the size of the COD dataset, the COD images are more complex than the SOD images; therefore, we kept the same number of iterations for the SOD and COD tasks. Because saliency and camouflage can overlap in the same regions, the contrastive learning module is trained only every 5 steps to avoid introducing too much conflict into COD, and with the same goal we set the weight of the contrastive loss to 0.1. For the confidence estimation module, we observed that an excessively large adversarial training loss may lead to over-fitting on noise. Our main goal in using adversarial learning is to provide reasonable uncertainty estimation. We therefore define the ground-truth output of the discriminator as the residual between the main network prediction and the corresponding ground truth, and set the weights of Eq. (<ref>) and Eq. (<ref>) to 1.0, achieving a trade-off between model performance and effective uncertainty estimation. § CONCLUSION In this paper, we proposed the first joint salient object detection and camouflaged object detection framework to explore the contradicting nature of these two tasks. Firstly, we conducted an in-depth analysis of the intrinsic relationship between the two tasks. Based on it, we designed a contrastive module to model the task-wise correlation, and a data interaction strategy to achieve context-aware data augmentation for SOD. Secondly, considering that camouflage is a local attribute, we proposed random sampling-based foreground cropping as the COD data augmentation technique. Finally, uncertainty-aware learning is explored to produce uncertainty estimates with a single forward pass. Experimental results across different datasets prove the effectiveness of our proposed joint learning framework. We observed that although contrast-based task-wise contrastive learning is effective for SOD, it damages the performance of COD due to the contradicting attributes of these two tasks. More investigation will be conducted to further explore informative feature representation learning via contrastive learning for class-agnostic tasks.
http://arxiv.org/abs/2307.04482v1
20230710110437
Nonlinear and nonreciprocal transport effects in untwinned thin films of ferromagnetic Weyl metal SrRuO$_3$
[ "Uddipta Kar", "Elisha Cho-Hao Lu", "Akhilesh Kr. Singh", "P. V. Sreenivasa Reddy", "Youngjoon Han", "Xinwei Li", "Cheng-Tung Cheng", "Song Yang", "Chun-Yen Lin", "I-Chun Cheng", "Chia-Hung Hsu", "D. Hsieh", "Wei-Cheng Lee", "Guang-Yu Guo", "Wei-Li Lee" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan Department of Physics, National Taiwan University, Taipei 10617, Taiwan Department of Physics, Applied Physics and Astronomy, Binghamton University, Binghamton, New York 13902, USA Department of Physics, California Institute of Technology, Pasadena, California 91125, USA Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Nano Science and Technology, Taiwan International Graduate Program, Academia Sinica and National Taiwan University, Taipei, Taiwan Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan Graduate Institute of Photonics and Optoelectronics, National Taiwan University, Taipei, Taiwan These authors contributed equally to the work. Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan Nano Science and Technology, Taiwan International Graduate Program, Academia Sinica and National Taiwan University, Taipei, Taiwan These authors contributed equally to the work. Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan These authors contributed equally to the work. Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan Department of Physics, National Taiwan University, Taipei 10617, Taiwan Department of Physics, California Institute of Technology, Pasadena, California 91125, USA Department of Physics, California Institute of Technology, Pasadena, California 91125, USA Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Graduate Institute of Photonics and Optoelectronics, National Taiwan University, Taipei, Taiwan Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Department of Physics, California Institute of Technology, Pasadena, California 91125, USA Department of Physics, Applied Physics and Astronomy, Binghamton University, Binghamton, New York 13902, USA Department of Physics, National Taiwan University, Taipei 10617, Taiwan Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan [email protected] Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan The identification of distinct charge transport features, deriving from nontrivial bulk band and surface states, has been a challenging subject in the field of topological systems. In topological Dirac and Weyl semimetals, nontrivial conical bands with Fermi-arc surfaces states give rise to negative longitudinal magnetoresistance due to chiral anomaly effect and unusual thickness dependent quantum oscillation from Weyl-orbit effect, which were demonstrated recently in experiments. In this work, we report the experimental observations of large nonlinear and nonreciprocal transport effects for both longitudinal and transverse channels in an untwinned Weyl metal of SrRuO_3 thin film grown on a SrTiO_3 substrate. From rigorous measurements with bias current applied along various directions with respect to the crystalline principal axes, the magnitude of nonlinear Hall signals from the transverse channel exhibits a simple sinα dependent at low temperatures, where α is the angle between bias current direction and orthorhombic [001]_ o, reaching a maximum when current is along orthorhombic [11̄0]_ o. 
On the contrary, the magnitude of nonlinear and nonreciprocal signals in the longitudinal channel attains a maximum for bias current along [001]_ o, and it vanishes for bias current along [11̄0]_ o. The observed α-dependent nonlinear and nonreciprocal signals in longitudinal and transverse channels reveal a magnetic Weyl phase with an effective Berry curvature dipole along [11̄0]_ o from surface states, accompanied by 1D chiral edge modes along [001]_ o. Nonlinear and nonreciprocal transport effects in untwinned thin films of ferromagnetic Weyl metal SrRuO_3 Wei-Li Lee August 12, 2023 ========================================================================================================= § INTRODUCTION Since the first experimental demonstration of a quantized conductance from counter-propagating edge spin channels in HgTe quantum well system <cit.>, topological materials have become one of the main research focuses in condensed matter physics and materials science. The two dimensional (2D) quantum spin Hall phase originates from inverted bulk bands that crosses near the system's boundary, revealing one dimensional helical edge states and thus the observed conductance quantization, which is also known as the 2D topological insulator (TI) phase and also recently reported in several other 2D systems <cit.>. Extending to 3D TI, the existence of a nontrivial bulk band topology with an intrinsic topological invariant gives rise to unusual gapless Dirac surface states, which was confirmed in experiments using surface sensitive angle-resolved photoemission spectroscopy and scanning tunneling microscopy <cit.>. More recently, a remarkable advancement was made by the observation of the quantized anomalous Hall conductance at zero magnetic field in a magnetic TI <cit.>, and it is a unique transport signature due to the topological nature of the system, which was theoretically predicted long ago <cit.>. In topological Dirac and Weyl semimetals (WSM), nontrivial crossings appear in the bulk bands near the Fermi surface <cit.>, and charge transport is overwhelmed by the unusual chiral charge excitations near nodal points with Berry phase π, showing superior electron mobility due to the suppressed backscattering by spin-momentum lock effect <cit.> and negative longitudinal magnetoresistance (MR) for aligned electric field and external magnetic field due to the chiral anomaly effect <cit.>. In addition, unique Fermi-arc surface states <cit.> appear on a surface of a WSM, connecting the projected Weyl-node pair, where a number of intriguing novel charge transport features have been predicted theoretically <cit.>. For a ferromagnetic WSM, there can be a minimum number of one Weyl-node pair with opposite chiral charges near the Fermi surface, accompanied by 1D chiral zero edge modes perpendicular to the connecting momentum of the Weyl-node pair. In this work, we report the experimental observations of nonlinear Hall signals <cit.> for T ≤ 10 K in the untwinned thin film of ferromagnetic Weyl metal SrRuO_3 (SRO) grown on a miscut SrTiO_3 (STO) substrate. Rigorous bias current dependent measurements of the nonlinear Hall signals correspond to an effective Berry curvature dipole (BCD) D⃗ from surface states along the orthorhombic [11̄0]_ o, where the subscript o refers to orthorhombic-phase. Surprisingly, a nonlinear and nonreciprocal transport effect in the longitudinal channel (NRTE) was also observed. 
It attains a maximum when the bias current is aligned perpendicular to D⃗, but it becomes vanishing small when bias current is parallel to D⃗, which can be attributed to the 1D chiral edge modes as demonstrated previously in the quantum anomalous Hall system <cit.>. Those results support the intriguing magnetic WSM phase in SRO/STO system with an effective surface D⃗ along [11̄0]_ o accompanied by 1D chiral edge modes along [001]_ o that circles around the surface of a SRO thin film. § EXPERIMENTAL SETUP SRO is known as a ferromagnetic and metallic oxide, showing an orthorhombic crystal structure with Pbnm space group symmetry at room temperature <cit.>. In the past, the observed non-monotonic magnetization dependent anomalous Hall conductivity <cit.>, unusual temperature dependent magnon gap <cit.> and softening of the magnon mode at low temperatures <cit.> all pointed to the existence of the Weyl nodes near the Fermi surface, supporting the Weyl metal phase in SRO system. Recently, the advancement in the growth of exceptional quality SRO thin films with ultra-low ruthenium vacancy level was made possible using oxide molecular beam system <cit.>. The low residual resistivity at T = 2 K of only about 10 μΩcm for a SRO film with thickness of about 10 nm <cit.>, which may largely suppress the smearing of the Weyl nodes due to the rare region effects <cit.>, makes it possible to explore various charge transport features associated with the Fermi-arc surface states and Weyl metal phase of SRO in thin film form <cit.>. Figure <ref>(a) shows an optical image of a sunbeam device patterned on an untwinned SRO thin film with a thickness t of about 13.7 nm. By using a STO (001) substrate with a miscut angle of about 0.1 degrees along one of the principal cubic axes, the volume fraction of the dominant domain was determined by high resolution X-ray scattering via the (02±1)_ o reflections to be about 95 % <cit.> (see Supplementary Note 1), where the orthorhombic crystalline directions are shown in Fig. <ref>(a). The right panel of Fig. <ref>(a) illustrates one of the Hall bars in the sunbeam device, and α defines the angle between the bias current direction and [001]_ o. ρ_ L and ρ_ T correspond to the longitudinal and transverse resistivity, respectively. With a compressive strain of about - 0.4 %, the Curie temperature T_ c for SRO thin film is about 150 K, and the magnetic easy axis is close to the film surface normal of [110]_ o <cit.>. Fig. <ref>(b) shows the α-dependent ρ_ L and ρ_ T values at three different applied field values of 0, - 1, and + 1 T along [110]_ o at T = 2 K. The ρ_ L appears to be at a maximum value of about 10.4 μΩcm for α = 90^ o and drops to a minimum value of about 8.1 μΩcm for α = 0 and 180^ o, exhibiting a clear cos(2α) dependence. On the other hand, the ρ_ T shows a sin(2α) dependence instead with a maximum magnitude of about 0.9 μΩcm at α = 45^ o and 135^ o. The simulated curves using a resistivity anisotropy model of ρ_ L and ρ_ T are shown as red curves in Fig. <ref>(b). We note that the amplitude of the anisotropy is significantly larger than the small changes in ρ_ L and ρ_ T when reversing the magnetization by changing the field from +1 to -1 T, inferring that the observed resistivity anisotropy in our SRO thin films is not dictated by the magnetization-related effects. The upper, middle, and lower panels of Fig. <ref>(c) show the temperature dependence of ρ_ L, ρ_ T, and ρ_ T/ρ_ L ratio, respectively, for different α values ranging from 0^ o to 180^ o. 
The residual resistivity ratio of ρ_ L(300K)/ρ_ L(5K) varies weakly and equals about 24.0 and 21.4 for α = 0^ o and 90^ o, respectively. Those results support the nearly single structure domain and thus untwinned nature in our SRO thin films, and also the exact dimensions of each Hall bar at different α values are very close to each other, which justifies the feasibility for the investigation of anisotropy effects in our SRO thin films. As the T decreases, we note that the magnitude of ρ_ T/ρ_ L ratio for α = 45^ o slightly decreases near the T_ c and then increases again below 100K, attaining a sizable ratio of ρ_ T/ρ_ L≈ - 0.085 at T = 2 K without saturation. Now, we turn to the discussions about the anomalous Hall effect (AHE) and the magnetization data in our SRO thin films. Figure <ref>(a) shows the field dependent Hall resistivity ρ_ xy at different temperatures ranging from 2 to 180 K, where weak field hysteresis loops in ρ_ xy-μ_ 0H curves with a small coercive field of less than 0.1 T were observed below T_ c as expected. The magnitude of converted Hall conductivity |σ_ xy| at zero field was plotted in Fig. <ref>(b) as a function of the corresponding conductivity σ_ xx in logarithmic scales for SRO thin films with different thicknesses ts ranging from 3.9 to 37.1 nm. Remarkably, |σ_ xy| appears to approach a constant and t-independent value of about 2.0 × 10^4 Ω^-1m^-1 at low temperatures, which falls in the same order as the intrinsic anomalous Hall conductivity due to the Berry curvatures of the bulk band, i.e., e^2/hc_ o≈ 5.0 × 10^4 Ω^-1m^-1 (c_ o being the orthorhombic lattice constant of about 7.81Å) shown as the red dashed line in Fig. <ref>(b). We note that no significant changes in the |σ_ xy| with σ_ xx down to T = 1.4 K, and this thus suggests a negligible contribution from the extrinsic skew scattering effect to AHE, where a linear relation of |σ_ xy| ∝σ_ xx is expected instead <cit.>. On the other hand, rigorous magnetization measurements were performed on a thicker SRO film with t ≈ 37.1 nm using a SQUID magnetometer. By subtracting the diamagnetic background at 200 K, the resulting magnetization M' - H curves at different temperatures are shown in Fig. <ref>(c), where, for μ_0H ≥ 2 T, the diamagnetic response seems to increase as the temperature drops. As shown in Fig. <ref>(d), the averaged slope of dM/dH for the field regime from μ_0H ≥ 2 T to 7 T was negative with increasing magnitude as the temperature decreases to 2 K, which is in big contrast to the nearly T-independent slope from the controlled measurements on a bare STO substrate (square symbols in Fig. <ref>(d)). The observed intrinsic |σ_ xy| ∼ e^2/hc_ o <cit.> and the enhanced diamagnetic response <cit.> at low temperatures strongly support the presence of the Weyl-nodes near the Fermi surface and thus the Weyl metal phase in SRO. We also remark that the zero-field Hall signals at low temperatures in SRO are dominated by the intrinsic AHE, which would be important for the subsequent discussions about the observed nonlinear Hall signals in SRO. § RESULTS As illustrated in the right panel Fig. <ref>(a), the second harmonic longitudinal (R_ L^2ω) and transverse (R_ T^2ω) resistance were measured with a bias current of 0.7 mA at a frequency of about 18.4 Hz. The resulting complex second harmonic signal can be expressed as R̃_ L(T)^2ω = R_ L(T)^2ωX + i R_ L(T)^2ωY, which is probed by a lock-in amplifier. The upper and lower panel of Fig. 
<ref>(a) shows the field dependent R_ L^2ωY and R_ T^2ωY, respectively, for α = 90^ o Hall bar device at different Ts ranging from 1.4 K to 10 K. For clarity, the curves of R_ L^2ωY- μ_0H and R_ T^2ωY- μ_0H at different Ts were systematically shifted upward by multiple of 100 μΩ and 50 μΩ, respectively. For T ≥ 10K, both R_ L^2ωY and R_ T^2ωY show no hysteresis loops in the weak field regime, which is in big contrast to the sizable ρ_ xy - μ_0H loops shown in Fig. <ref>(a) at similar temperatures. Below 6K, a sizable hysteresis loop starts to appear in R_ T^2ωY as shown in the lower panel of Fig. <ref>(a), but R_ L^2ωY remains nearly field-independent without showing a hysteresis loop. The definition of Δ R_ T^2ωY is illustrated in the lower panel of Fig. <ref>(a), and it corresponds to the change of the R_ T^2ωY signal at zero magnetic field when reversing the magnetization of the SRO thin film. For α = 90^ o Hall bar device with bias current I along [11̄0]_ o, the Δ R_ T^2ωY gradually increases in magnitude as T drops, giving a Δ R_ T^2ωY ≈ 44 μΩ at T = 1.4 K. Remarkably, for α = 180^ o Hall bar device with a bias current I along [001]_ o as demonstrated in Fig. <ref>(b), the hysteresis loops appear in the longitudinal channel of R_ L^2ωY at low temperatures instead, giving a value of Δ R_ L^2ωY ≈ 100 μΩ at T = 1.4 K, and no hysteresis loops were observed in the transverse channel (R_ T^2ωY). Figure <ref>(c) summarized the results from 9 different α values Hall bars from the sunbeam device shown in Fig. <ref>(a) (see Supplementary Note 2 for detailed descriptions on measurement geometry and polarity). The upper panel of Fig. <ref>(c) shows the first harmonic signals (Δ R_ L^ωX) and second harmonic signals (Δ R_ L^2ωY) in the longitudinal channel as a function of α with different Ts. Δ R_ L^2ωY exhibits a maximum value of about 100 μΩ at α = 0^ o and 180^ o, and it gradually decreases in magnitude to zero as α approaches 90^ o. In contrast, the first harmonic signals of Δ R_ L^ωX are nearly zero for all α and T values as expected. On the other hand, the lower panel of Fig. <ref>(c) puts together the α dependent first harmonic signals (Δ R_ T^ωX) and second harmonic signals (Δ R_ T^2ωY) in the transverse channel at different Ts. Unlike the longitudinal channel, the Δ R_ T^2ωY data show a relatively good agreement to the sinα dependence (dashed red line in the lower panel of Fig. <ref>(c)), giving a value of Δ R_ T^2ωY ≈ 44 μΩ at α = 90^ o and vanishing values for α = 0^ o and 180^ o. Such a unique sinα dependence in Δ R_ T^2ωY is drastically distinct from the nearly α-independent first harmonic signals of Δ R_ T^ωX. For consistency check, the current dependent R_ L^2ωY for α = 180^ o and R_ T^2ωY for α = 90^ o at T = 2 K were carried out and shown in the upper panel and lower panel, respectively, of Fig. <ref>(a) with different bias currents ranging from 0.3 to 0.9 mA, where the curves were systematically shifted upward for clarity. For α = 180^ o, Δ R_ L^2ωY progressive increases from 35 to 84 μΩ as the bias current I increases from 0.3 to 0.9 mA. The detailed I-dependent on the second harmonic signals (Δ R_ L(T)^2ωX + i Δ R_ L(T)^2ωY) were shown in the upper panel of Fig. <ref>(b), where only Δ R_ L^2ωY data show nearly I-linear dependent behavior, and all other second harmonic signals are vanishing small. 
On the contrary, for α = 90^ o, the Δ R_ T^2ωY increases from about 10 to 30 μΩ as I increases from 0.3 to 0.9 mA, and the corresponding I-dependent signals are shown in the lower panel of Fig. <ref>(b). The nearly I-linear dependence of Δ R_ T^2ωY for α = 90^ o only appears in the transverse channel but not in the longitudinal channel of Δ R_ L^2ωY, justifying the presence of nonlinear Hall effect in SRO thin films. The magnitude of both Δ R_ L^2ωY for α = 180^ o and Δ R_ T^2ωY for α = 90^ o grow rapidly as T drops below 10 K as shown in the upper panel and lower panel, respectively, of Fig. <ref>(c), which is dramatically different from the minor drops in ρ_L(T) and the nearly constant σ_ xy≡ρ_T/(ρ_L^2+ρ_T^2) with decreasing T as shown in Fig. <ref>(c) and Fig. <ref>(b), respectively. We also note that the extracted Δ R_ T^2ω and Δ R_ L^2ω do not vary significantly with the bias current frequency (see Supplementary Note 3), and they derive from the difference in the second harmonic signals between opposite magnetization directions in SRO at zero external magnetic field as illustrated in Fig. <ref>(a) and (b). Therefore, the extrinsic contact effects and also possible magnetic field related effects for NRTE and nonlinear Hall effects can be excluded <cit.>. § DISCUSSIONS For SRO thin films, the onset of ferromagnetism for T ≤ 150 K with magnetization along [110]_ o can, in principle, break the mirror planes with normal vectors perpendicular to the magnetization direction, and a similar mirror symmetry breaking by magnetism has been reported before <cit.>. We also conducted rotational anisotropy second harmonic generation measurements, which can be sensitive to the magnetic order parameter in perovskite transition metal oxides <cit.>. Figure <ref>(a) shows the temperature dependence of the scattering plane angle averaged SHG intensity from a SRO/STO film with t ≈ 35 nm, which exhibits an intensity upturn below 150 K. Although we did not resolve whether the magnetic order induced SHG susceptibility is directly proportional to the magnetization or to its square (as would be the case for magnetostriction), the critical temperature is consistent with that reported for bulk single crystals. We also noted a progressive increase in the SHG intensity as temperature decreases further, inferring an increased contribution from surface states with inversion symmetry breaking. However, we can not completely exclude the possible bulk inversion symmetry breaking in SRO/STO system at low temperatures due to possible lattice strain gradient <cit.> and non-collinear magnetic configuration effects <cit.> (see also Supplementary Note 4), which requires further investigations with advanced characterization tools at low temperatures. The growing surface states contribution at low temperatures is in accord with the dramatic changes of magnetotransport behavior below 10 K as demonstrated in Fig. <ref>. As T decreases from 10 K to 1.4 K, the weak field MR shows a crossover from a negative MR to a positive MR as shown in Fig. <ref>(a), and the Hall resistivity (Fig. <ref>(a)) also shows a nonlinear field dependence below 10 K, indicating a multiple channel conduction at lower temperatures. On the other hand, pronounced quantum oscillations with a frequency of about 28 T were observed for all α values in our sunbeam device as shown in Fig. <ref> (b) for α = 90^ o, and the corresponding Fast Fourier transform (FFT) spectra for different Ts were shown in Fig. <ref>(c). 
We note that 28 T quantum oscillations in SRO thin film were recently reported to behave as a 2D-like Fermi pocket with signatures that are consistent with Weyl-orbit quantum oscillation effect due to the bulk tunneling between the top and bottom Fermi-arc surface states <cit.>. The open black squares and open red circles in Fig. <ref>(d) plot the rapid increase of FFT amplitude for quantum oscillations below 10 K for α = 180^ o and 90^ o, respectively, which turns out to show a strong correlation with the rapid increases of Δ R_ L^2ω (solid black squares) and Δ R_ T^2ω (solid red circles). This is in big contrast to the minor decrease of resistivity (ρ_ L) from about 13.1 to 10.3 μΩcm as T goes from 10 to 2 K. Therefore, the rapid increases of the second harmonic signals of Δ R_ T^2ω and Δ R_ L^2ω below 10 K (Fig. <ref>(c)) are unlikely scaled with the bulk Drude electron lifetime. Instead, it signifies a crossover to a surface dominant charge transport with inversion symmetry breaking below 10 K. In a magnetic system with broken time reversal symmetry, both intrinsic and extrinsic AHE can contribute to the measured Hall signals <cit.>, and nonlinear Hall signals at the second harmonic generally require additional inversion symmetry breaking <cit.>. As demonstrated in Fig. <ref>(b), the low-temperature AHE in SRO was dominated by the contribution from the intrinsic AHE due to Weyl nodes near the Fermi-surface <cit.>, where σ_ xy is nearly a constant of about e^2/hc_ o down to about 1.4 K, and thus extrinsic skew scattering effect <cit.> shall not play a significant role for our observed nonlinear Hall signals. On the other hand, the distinct sinα dependence of Δ R_ T^2ω does not seem to be compatible with the intrinsic mechanism due to the electron-lifetime-independent Berry curvature effect <cit.>, where intrinsic AHE at zero field (Δ R_ T^ω) is nearly α independent as shown in the lower panel of Fig. <ref>(c). Therefore, the observed nonlinear Hall signals of Δ R_ T^2ωY is more likely deriving from the BCD <cit.> due to surface states with inversion symmetry breaking. From rigorously calculated band dispersions along k_ // and k_ z (see Supplementary Note 5), we found that most of Weyl nodes appear to tilt along the k_// and thus [11̄0]_ o. Taking Weyl node of W_||^1 with |ε-ε_ F|= 18.36 meV as an example, the band dispersions along k_// and k_ z were plotted in the left panel and right panel, respectively, of Fig. <ref>(e). It shows a large tilting of Weyl node along k_ //, but the band dispersion along k_ z is nearly symmetric with respect to the Weyl node. It is thus expected to have nonzero total BCD D⃗ arising from surface projected Weyl nodes along [11̄0]_ o as also supported by the α-dependent Δ R_ T^2ω. The BCD contribution to the second harmonic current density can be derived as j_a^2ω = χ_abc E_bE_c, and χ_abc≡ -ε_adce^3τ/2ħ^2(1+iωτ)D_bd. The BCD can be expressed as D_bd≡∫d^3k/(2π)^3f_0∂Ω_d/∂ k_b, where f_0 and Ω are the equilibrium Fermi-Dirac distribution and the Berry curvature, respectively, and it can be nonzero for systems with titled Weyl nodes and inversion asymmetry <cit.>. Therefore, with a bias current along b axis, the resulting nonlinear Hall current is simply j_a^2ω = χ_abb E_b^2 with χ_abb = e^3τ/2ħ^2(1+iωτ)D_bc, and thus j_a^2ω is a direct measure for the Berry curvature gradient along the bias current direction. 
In our sunbeam device with different bias current directions of α values ranging from 0^ o to 180^ o, a largest nonlinear Hall signal was observed with α = 90^ o, inferring the presence of an effective BCD D⃗ along [11̄0]_ o. In order to compare the magnitude of our observed nonlinear Hall effect with other systems, we adopted the 3D formula with resistivity anisotropy effect shown in Fig. <ref>(b). The α dependent Δ R_ T^2ω can be deduced to give Δ R_ T^2ω = χ_abbρ_aρ_b^2/Wt^2 Isinα, where ρ_b(ρ_a) is the resistivity along [11̄0]_ o([001]_ o), and W is the width of the Hall bar device (W = 150 μm) (see Supplementary Note 6). The sinα and I-linear dependences of Δ R_ T^2ω are well confirmed by the experiment shown in lower-panel of Fig. <ref>(c) and Fig. <ref>(b), respectively. By using a Drude electron lifetime of about τ_d ∼ 1.9 × 10^-13 s, the magnitude of the effective 3D BCD can be roughly estimated to be about |D⃗| ≈ 55, which falls in the same order of magnitude as several other reported 3D Weyl systems with large BCD <cit.>. On the other hand, the observation of a large NRTE of Δ R_ L^2ω in the longitudinal channel is intriguing, and its amplitude also grows with decreasing T below 10 K, suggesting an intimate relation with the appearance of the nonlinear Hall signals of Δ R_ T^2ω. However, as demonstrated in Fig. <ref>(c), the α dependence reveals a clear orthogonality in the Δ R_ L^2ω and Δ R_ T^2ω. We thus proposed a real space scenario as illustrated in Fig. <ref>(b), where a D⃗ along [11̄0]_ o is accompanied by 1D chiral edge modes along the orthogonal direction of [001]_ o (orange line). Figure <ref>(c) illustrates a minimum Weyl model with one pair of Weyl nodes with chiral charges of +1 and -1. For the yellow-shaded slice between Weyl node pair of opposite chiral charges, the integration of the total Berry flux across each 2D slice will give a Chern number of 1 accompanied by a unique 1D chiral edge modes at the boundary of the system as shown in the upper panel of Fig. <ref>(c) <cit.>. On the other hand, for green-shaded slice with the Weyl-node pair on the same side, the total Chern number is then zero without the presence of chiral edge modes. The Fermi-arc surface states are thus the zero energy chiral edge modes, connecting the non-overlapped Weyl-node pair on a surface Brillouin zone. By searching for Weyl nodes within an energy window of |ε-ε_ F| ≤ 20 meV in the calculated SRO band structure, a number of Weyl nodes can be identified and projected on (110)_ o plane as demonstrated in Fig. <ref>(d). Symbols of sphere, square and triangle correspond to Weyl nodes from three different band pairs. The red and blue colors represent the corresponding chiral charge of +1 and -1, respectively. We note that the yellow-shaded region in Fig.<ref>(d) highlights the non-zero total Chern number and thus supports the presence of 1D chiral edge modes along k_ z. When flipping the magnetization in SRO, the signs of the chiral charges also reverse due to the swapping of spin subbands, and both the directions of BCD D⃗ and 1D chiral edge modes will reverse accordingly. Such 1D chiral edge modes are equivalent to the 1D chiral edge modes in a magnetic TI with quantum anomalous Hall phase <cit.>, where a large NRTE in the longitudinal channel had been recently reported arising from the asymmetric scattering between the 1D chiral edge modes and other surface states <cit.>. 
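As a rough numerical cross-check of the |D⃗| ≈ 55 figure quoted above, the Δ R_ T^2ω expression can be inverted for the effective BCD using the parameters reported in the text (Δ R_ T^2ωY ≈ 44 μΩ at 1.4 K, I = 0.7 mA, W = 150 μm, t = 13.7 nm, the α = 0^ o and α = 90^ o resistivities of Fig. <ref>(b), and τ_d ≈ 1.9 × 10^-13 s). The short sketch below assumes SI units and the dc limit ωτ ≪ 1; it is an order-of-magnitude illustration added here, not part of the original analysis.

```python
# Invert  dR_T^2w = chi_abb * rho_a * rho_b^2 * I * sin(alpha) / (W * t^2)  at alpha = 90 deg,
# then use the dc limit chi_abb = e^3 * tau * D / (2 hbar^2) to extract the effective BCD D.
dRT_2w = 44e-6      # second-harmonic Hall jump at zero field, 1.4 K [Ohm]
I      = 0.7e-3     # bias current [A]
W      = 150e-6     # Hall-bar width [m]
t      = 13.7e-9    # SRO film thickness [m]
rho_a  = 8.1e-8     # resistivity along [001]_o (alpha = 0 deg) [Ohm m]
rho_b  = 10.4e-8    # resistivity along [1-10]_o (alpha = 90 deg) [Ohm m]
tau    = 1.9e-13    # Drude electron lifetime [s]
e, hbar = 1.602e-19, 1.055e-34

chi_abb = dRT_2w * W * t**2 / (rho_a * rho_b**2 * I)    # [A/V^2]
D = 2 * hbar**2 * chi_abb / (e**3 * tau)                # dimensionless effective 3D BCD
print(f"chi_abb ~ {chi_abb:.2f} A/V^2, |D| ~ {D:.0f}")  # |D| comes out of order 50-60, cf. ~55
```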
For the Weyl metal SRO, in principle, similar NRTE in the longitudinal channel for bias current along [001]_ o (Δ R_ L^2ω for α = 0^ o and 180^ o) can thus appear due to the asymmetric scattering between the 1D chiral edge modes and the Fermi-arc surface states. This may also explain the vanishing of Δ R_ L^2ω for α = 90^ o and thus the intriguing orthogonal relation between Δ R_ L^2ω and Δ R_ T^2ω shown in Fig. <ref>(c). We note that our observed Δ R_ T^2ω due to an effective BCD of surface states may be related to a recently proposed theory <cit.> that a hotline with divergent Berry curvature, separating the Fermi-arc surface states and 3D bulk states, may lead to a large nonlinear Hall response. However, the issues regarding the contribution of Fermi-arc surface states to NRTE and nonlinear Hall effect call for more theoretical and experimental efforts. § CONCLUSIONS In summary, large nonlinear and nonreciprocal charge transport effects along the longitudinal (Δ R_ L^2ω) and transverse (Δ R_ T^2ω) channels were discovered below 10 K in a sunbeam device fabricated from an untwinned thin film SRO grown on miscut STO (001) substrate. Below 10 K, the crossover of weak field MR behavior and also the rapid rise of 2D-like quantum oscillation amplitude not only support the surface dominant charge transport but also agree well with the observed T dependent Δ R_ L(T)^2ω. The detailed bias current direction dependence reveals an intriguing orthogonality between the observed Δ R_ L^2ω and Δ R_ T^2ω, and, for bias current along [11̄0]_ o (α = 90^ o), Δ R_ T^2ω is at maximum while Δ R_ L^2ω is vanishing small. Considering the dominant roles of the intrinsic AHE and surface charge transport at low temperatures in thin films of SRO/STO system, a scenario of an effective BCD D⃗ from surface states along [11̄0]_ o accompanied by 1D chiral edge modes along [001]_ o was proposed to give a qualitative explanation for the observed α dependent Δ R_ L^2ω and Δ R_ T^2ω, which is supported by the calculated band dispersion with tilted Weyl nodes. Our findings demonstrate the feasibility of using the nonlinear and nonreciprocal charge transport effect as a probe for intriguing topology-related electronic properties in a topological system, such as the BCD from nonlinear Hall and 1D chiral edge modes from NRTE. On the other hand, our observations of nonlinear Hall in SRO/STO may also highlight the intriguing possibility of investigating surface dominant charge transport behavior in topological thin film systems. § METHODS The sunbeam device was patterned on a SRO/STO thin film with SRO layer thickness t ≈ 13.7 nm, using standard photolithography followed by argon ion milling. It comprises of 16 Hall bars with α ranging from 0^ o to 360^ o, and the angle difference between adjacent Hall bars is 22.5^ o. One of the Hall bars was carefully aligned along the SRO orthorhombic [001]_ o direction, which was defined as α = 0^ o. Each Hall bar has exactly the same geometry with a width of 150 μm and a length of 290 μm between longitudinal voltage leads. The Au (35 nm)/Ti (10 nm) electrodes were deposited and fabricated via a subsequent step of photolithography. The magnetization measurements on SRO/STO thin films were carried out using a SQUID-MPMS system from Quantum Design. The longitudinal (transverse) Δ R_ L (T)^ω and Δ R_ L(T)^2ω signals were measured simultaneously by a lock-in amplifier at first and second harmonic references, respectively. 
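The dual-harmonic detection described above can be illustrated with a purely software analogue of lock-in demodulation. The signal model, sampling parameters and response values in the sketch below are invented for illustration (only the ≈ 18.4 Hz drive and 0.7 mA bias amplitude are taken from the text); the actual instrument settings are not specified here.

```python
import numpy as np

# Schematic digital lock-in: recover 1st- and 2nd-harmonic resistances from V(t)
# for a sinusoidal bias current I(t) = I0*sin(w*t). Toy signal model:
# V(t) = R1*I0*sin(w*t) + R2*(I0*sin(w*t))**2, so the 2w quadrature carries R2*I0^2/2.
f0, I0 = 18.4, 0.7e-3            # drive frequency [Hz], current amplitude [A]
R1, R2 = 10.0, 0.05              # hypothetical linear [Ohm] and nonlinear [Ohm/A] responses
fs, T  = 20_000, 5.0             # sampling rate [Hz], record length [s] (integer # of periods)
tt = np.arange(0, T, 1 / fs)
w  = 2 * np.pi * f0
V  = R1 * I0 * np.sin(w * tt) + R2 * (I0 * np.sin(w * tt)) ** 2

def lockin(sig, n):
    """In-phase (X) and quadrature (Y) components of sig at the n-th harmonic."""
    return 2 * np.mean(sig * np.sin(n * w * tt)), 2 * np.mean(sig * np.cos(n * w * tt))

X1, Y1 = lockin(V, 1)            # first harmonic  -> R^w  ~ X1 / I0
X2, Y2 = lockin(V, 2)            # second harmonic -> R^2w ~ sqrt(X2^2 + Y2^2) / I0
print(X1 / I0, np.hypot(X2, Y2) / I0)
```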
Rotational anisotropy (RA) SHG measurements were performed using a high-speed rotating scattering plane method described elsewhere <cit.>. The light source was a Ti:sapphire laser of central wavelength of 800 nm. The incident beam was focused onto the sample surface at oblique incidence (θ = 10^ o) with a spot size of ∼ 30 μm. Electronic structure calculations of SrRuO_3 were performed using projector augmented plane wave method <cit.> as implemented in the Vienna ab-initio Simulation package <cit.> within the generalized gradient approximation schemes<cit.>. A 18 × 18 × 14 Gamma centered k-point mesh was used in computations with a cutoff energy of 500 eV. The convergence criterion for the electronic density was defined as 10^-6 eV. The spin-orbit coupling effects were included in self-consistent calculations along with ferromagnetic spin polarization in (110) direction. The effect of electronic correlations in the Ru d states (4d^4 for Ru4^+ ) was taken into account by using the rotationally invariant GGA+U scheme <cit.> with U = 3.0 eV and J = 0.6 eV. We have used Ru d-orbital and O p-orbital to construct the Wannier functions <cit.> with VASP2WANNIER90 <cit.> interface. We have used WannierTools <cit.> to search the Weyl points and to identify the chirality of each Weyl point. § DATA AVAILABILITY All the supporting data are included in the main text and supplementary information. The raw data and other related data for this paper can be requested from W.L.L. § CODE AVAILABILITY The input files for DFT using VASP, Wannier tight binding and WannierTools are available upon reasonable request. § ACKNOWLEDGEMENTS This work was supported by the National Science and Technology Council of Taiwan (NSTC Grant No. 108-2628-M-001-007-MY3 and 111-2112-M-001-056-MY3) and the joint project of Academia Sinica and National Taiwan University (Grant No. AS-NTU-110-10). § COMPETING INTERESTS The authors declare no competing financial or non-financial interests. § AUTHOR CONTRIBUTIONS U.K., E.C.H.L., C.T.C., IC.C., and W.L.L. carried out the low-temperature magneto-transport measurements and data analyses. U.K. and A.K.S. grew the epitaxial SRO films. A.K.S., S.Y., C.Y.L., and C.H.H. performed the X-ray measurements at NSRRC in Taiwan. P.V.S.R., G.Y.G., and W.C.L. performed SRO band calculations. Y.J.H., X.W.L., and D.H. performed the SHG measurements and analysis. W.L.L. designed the experiment and wrote the manuscript. § ADDITIONAL INFORMATION Supplementary Information accompanies the paper on the XXXX website (https://XXXXX). 10 url<#>1urlprefixURL Konig2007 authorKönig, M. et al. titleQuantum spin Hall insulator state in HgTe quantum wells. journalScience volume318, pages766–770 (year2007). Du2015 authorDu, L., authorKnez, I., authorSullivan, G. & authorDu, R.-R. titleRobust helical edge transport in gated InAs/GaSb bilayers. journalPhys. Rev. Lett. volume114, pages096802 (year2015). Fei2017 authorFei, Z. et al. titleEdge conduction in monolayer WTe_2. journalNat. Physics volume13, pages677–682 (year2017). Tang2017 authorTang, S. et al. titleQuantum spin Hall state in monolayer 1T'-WTe_2. journalNat. Physics volume13, pages683–687 (year2017). Hsieh2008 authorHsieh, D. et al. titleA topological Dirac insulator in a quantum spin Hall phase. journalNat. volume452, pages970–974 (year2008). Alpi2010 authorAlpichshev, Z. et al. titleSTM imaging of electronic waves on the surface of Bi_2Te_3: Topologically protected surface states and hexagonal warping effects. journalPhys. Rev. Lett. 
volume104, pages016401 (year2010). Hasan2010 authorHasan, M. Z. & authorKane, C. L. titleColloquium: Topological insulators. journalRev. Mod. Phys. volume82, pages3045–3067 (year2010). Chang2013 authorChang, C.-Z. et al. titleExperimental observation of the quantum anomalous Hall effect in a magnetic topological insulator. journalScience volume340, pages167–170 (year2013). Kou2014 authorKou, X. et al. titleScale-invariant quantum anomalous Hall effect in magnetic topological insulators beyond the two-dimensional limit. journalPhys. Rev. Lett. volume113, pages137201 (year2014). Checkelsky2014 authorCheckelsky, J. G. et al. titleTrajectory of the anomalous Hall effect towards the quantized state in a ferromagnetic topological insulator. journalNat. Physics volume10, pages731–736 (year2014). Haldane1988 authorHaldane, F. D. M. titleModel for a quantum Hall effect without Landau levels: Condensed-matter realization of the "parity anomaly". journalPhys. Rev. Lett. volume61, pages2015–2018 (year1988). Wan2011 authorWan, X., authorTurner, A. M., authorVishwanath, A. & authorSavrasov, S. Y. titleTopological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates. journalPhys. Rev. B volume83, pages205101 (year2011). Wang2012 authorWang, Z. et al. titleDirac semimetal and topological phase transitions in A_3Bi (A = Na, K, Rb). journalPhys. Rev. B volume85, pages195320 (year2012). Liang2015 authorLiang, T. et al. titleUltrahigh mobility and giant magnetoresistance in the Dirac semimetal Cd_3As_2. journalNat. Mater. volume14, pages280–284 (year2015). Huang2015 authorHuang, X. et al. titleObservation of the chiral-anomaly-induced negative magnetoresistance in 3D Weyl semimetal TaAs. journalPhys. Rev. X volume5, pages031023 (year2015). Xiong2015 authorXiong, J. et al. titleEvidence for the chiral anomaly in the Dirac semimetal Na_3Bi. journalScience volume350, pages413–416 (year2015). Armitage2018 authorArmitage, N. P., authorMele, E. J. & authorVishwanath, A. titleWeyl and Dirac semimetals in three-dimensional solids. journalRev. Mod. Phys. volume90, pages015001 (year2018). Potter2014 authorPotter, A. C., authorKimchi, I. & authorVishwanath, A. titleQuantum oscillations from surface Fermi arcs in Weyl and Dirac semimetals. journalNat. Commun. volume5, pages5161 (year2014). Waw2021 authorWawrzik, D., authorYou, J.-S., authorFacio, J. I., authorvan den Brink, J. & authorSodemann, I. titleInfinite Berry curvature of Weyl Fermi arcs. journalPhys. Rev. Lett. volume127, pages056601 (year2021). Gao2014 authorGao, Y., authorYang, S. A. & authorNiu, Q. titleField induced positional shift of Bloch electrons and its dynamical implications. journalPhys. Rev. Lett. volume112, pages166601 (year2014). Sodemann2015 authorSodemann, I. & authorFu, L. titleQuantum nonlinear Hall effect induced by Berry curvature dipole in time-reversal invariant materials. journalPhys. Rev. Lett. volume115, pages216806 (year2015). Ma2019 authorMa, Q. et al. titleObservation of the nonlinear Hall effect under time-reversal-symmetric conditions. journalNat. volume565, pages337–342 (year2019). Yasuda2020 authorYasuda, K. et al. titleLarge non-reciprocal charge transport mediated by quantum anomalous Hall edge states. journalNat. Nanotechnology volume15, pages831–835 (year2020). Koster2012 authorKoster, G. et al. titleStructure, physical properties, and applications of SrRuO_3 thin films. journalRev. Mod. Phys. volume84, pages253–298 (year2012). Kar2021 authorKar, U. et al. 
titleHigh-sensitivity of initial SrO growth on the residual resistivity in epitaxial thin films of SrRuO_3 on SrTiO_3 (001). journalSci. Rep. volume11, pages16070 (year2021). Fang2003 authorFang, Z. et al. titleThe anomalous Hall effect and magnetic monopoles in momentum space volume302, pages92–95 (year2003). Chen2013 authorChen, Y., authorBergman, D. L. & authorBurkov, A. A. titleWeyl fermions and the anomalous Hall effect in metallic ferromagnets. journalPhys. Rev. B volume88, pages125110 (year2013). Itoh2016 authorItoh, S. et al. titleWeyl fermions and spin dynamics of metallic ferromagnet SrRuO_3. journalNat. Commun. volume7, pages11788 (year2016). Jenni2019 authorJenni, K. et al. titleInterplay of electronic and spin degrees in ferromagnetic SrRuO_3: Anomalous softening of the magnon gap and stiffness. journalPhys. Rev. Lett. volume123, pages017202 (year2019). Nair2018 authorNair, H. P. et al. titleSynthesis science of SrRuO_3 and CaRuO_3 epitaxial films with high residual resistivity ratios. journalAPL Mater. volume6, pages046101 (year2018). Taki2020 authorTakiguchi, K. et al. titleQuantum transport evidence of Weyl fermions in an epitaxial ferromagnetic oxide. journalNat. Commun. volume11, pages4969 (year2020). Cap2002 authorCapogna, L. et al. titleSensitivity to disorder of the metallic state in the ruthenates. journalPhys. Rev. Lett. volume88, pages076602 (year2002). Nand2014 authorNandkishore, R., authorHuse, D. A. & authorSondhi, S. L. titleRare region effects dominate weakly disordered three-dimensional Dirac points. journalPhys. Rev. B volume89, pages245110 (year2014). Kaneta2022 authorKaneta-Takada, S. et al. titleHigh-mobility two-dimensional carriers from surface Fermi arcs in magnetic Weyl semimetal films. journalnpj Quantum Mater. volume7, pages102 (year2022). kar2022 authorKar, U. et al. titleThe thickness dependence of quantum oscillations in ferromagnetic Weyl metal SrRuO_3. journalnpj Quantum Mater. volume8, pages8 (year2023). Nagaosa2010 authorNagaosa, N., authorSinova, J., authorOnoda, S., authorMacDonald, A. H. & authorOng, N. P. titleAnomalous Hall effect. journalRev. Mod. Phys. volume82, pages1539–1592 (year2010). Rao2014 authorRaoux, A., authorMorigi, M., authorFuchs, J.-N., authorPiéchon, F. & authorMontambaux, G. titleFrom dia- to paramagnetic orbital susceptibility of massless fermions. journalPhys. Rev. Lett. volume112, pages026402 (year2014). Sue2021 authorSuetsugu, S. et al. titleGiant orbital diamagnetism of three-dimensional Dirac electrons in Sr_3PbO antiperovskite. journalPhys. Rev. B volume103, pages115117 (year2021). Morimoto2016 authorMorimoto, T. & authorNagaosa, N. titleChiral anomaly and giant magnetochiral anisotropy in noncentrosymmetric Weyl semimetals. journalPhys. Rev. Lett. volume117, pages146603 (year2016). LiRH2021 authorLi, R.-H., authorHeinonen, O. G., authorBurkov, A. A. & authorZhang, S. S.-L. titleNonlinear Hall effect in Weyl semimetals induced by chiral anomaly. journalPhys. Rev. B volume103, pages045105 (year2021). Nandy2021 authorNandy, S., authorZeng, C. & authorTewari, S. titleChiral anomaly induced nonlinear Hall effect in semimetals with multiple Weyl points. journalPhys. Rev. B volume104, pages205124 (year2021). Torre2021 authorTorre, A. d. l. et al. titleMirror symmetry breaking in a model insulating cuprate. journalNat. Phys. volume17, pages777–781 (year2021). Seyler2020 authorSeyler, K. L. et al. titleSpin-orbit-enhanced magnetic surface second-harmonic generation in Sr_2IrO_4. journalPhys. Rev. 
B volume102, pages201113 (year2020). Hwang2012 authorHwang, H. Y. et al. titleEmergent phenomena at oxide interfaces. journalNat. Mater. volume11, pages103–113 (year2012). Pesq2012 authorPesquera, D. et al. titleSurface symmetry-breaking and strain effects on orbital occupancy in transition metal perovskite epitaxial films. journalNat. Commun. volume3, pages1189 (year2012). Sohn2021 authorSohn, B. et al. titleSign-tunable anomalous Hall effect induced by two-dimensional symmetry-protected nodal structures in ferromagnetic perovskite thin films. journalNat. Mater. volume20, pages1643–1649 (year2021). mSHG1 authorTrain, C., authorNuida, T., authorGheorghe, R., authorGruselle, M. & authorOhkoshi, S.-i. titleLarge magnetization-induced second harmonic generation in an enantiopure chiral magnet. journalJ. Am. Chem. Soc. volume131, pages16838–16843 (year2009). mSHG2 authorSun, Z. et al. titleGiant nonreciprocal second-harmonic generation from antiferromagnetic bilayer CrI_3. journalNature volume572, pages497–501 (year2019). Du2019 authorDu, Z. Z., authorWang, C. M., authorLi, S., authorLu, H.-Z. & authorXie, X. C. titleDisorder-induced nonlinear Hall effect with time-reversal symmetry. journalNat. Commun. volume10, pages3047 (year2019). Iso2020 authorIsobe, H., authorXu, S.-Y. & authorFu, L. titleHigh-frequency rectification via chiral Bloch electrons. journalSci. Adv. volume6, pageseaay2497 (year2020). He2021 authorHe, P. et al. titleQuantum frequency doubling in the topological insulator Bi_2Se_3. journalNat. Commun. volume12, pages698 (year2021). Wang2021 authorWang, C., authorGao, Y. & authorXiao, D. titleIntrinsic nonlinear Hall effect in antiferromagnetic tetragonal CuMnAs. journalPhys. Rev. Lett. volume127, pages277201 (year2021). Liu2021 authorLiu, H. et al. titleIntrinsic second-order anomalous Hall effect and its application in compensated antiferromagnets. journalPhys. Rev. Lett. volume127, pages277202 (year2021). Gao2023 authorGao, A. et al. titleQuantum metric nonlinear Hall effect in a topological antiferromagnetic heterostructure. journalScience pages10.1126/science.eadf1506 (year2023). Zhang2018 authorZhang, Y., authorSun, Y. & authorYan, B. titleBerry curvature dipole in Weyl semimetal materials: An ab initio study. journalPhys. Rev. B volume97, pages041101 (year2018). Du2018 authorDu, Z. Z., authorWang, C. M., authorLu, H.-Z. & authorXie, X. C. titleBand signatures for strong nonlinear Hall effect in bilayer WTe_2. journalPhys. Rev. Lett. volume121, pages266601 (year2018). Zhang2022 authorZhang, C.-L., authorLiang, T., authorKaneko, Y., authorNagaosa, N. & authorTokura, Y. titleGiant Berry curvature dipole density in a ferroelectric Weyl semimetal. journalnpj Quantum Mater. volume7, pages103 (year2022). Harter2015 authorHarter, J. W., authorNiu, L., authorWoss, A. J. & authorHsieh, D. titleHigh-speed measurement of rotational anisotropy nonlinear optical harmonic generation using position-sensitive detection. journalOpt. Lett. volume40, pages4671–4674 (year2015). Kresse authorKresse, G. & authorJoubert, D. titleFrom ultrasoft pseudopotentials to the projector augmented-wave method. journalPhys. Rev. B volume59, pages1758–1775 (year1999). vasp authorKresse, G. & authorFurthmüller, J. titleEfficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. journalPhys. Rev. B volume54, pages11169–11186 (year1996). PBE authorPerdew, J. P., authorBurke, K. & authorErnzerhof, M. titleGeneralized gradient approximation made simple. journalPhys. Rev. 
Lett. volume77, pages3865–3868 (year1996). Liechtenstein authorLiechtenstein, A. I., authorAnisimov, V. I. & authorZaanen, J. titleDensity-functional theory and strong interactions: Orbital ordering in mott-hubbard insulators. journalPhys. Rev. B volume52, pagesR5467–R5470 (year1995). Marzari authorMarzari, N. & authorVanderbilt, D. titleMaximally localized generalized wannier functions for composite energy bands. journalPhys. Rev. B volume56, pages12847–12865 (year1997). Mostofi authorMostofi, A. A. et al. titleAn updated version of wannier90: A tool for obtaining maximally-localised wannier functions. journalComput. Phys. Commun. volume185, pages2309–2310 (year2014). Franchini authorFranchini, C. et al. titleMaximally localized Wannier functions in LaMnO_3 within PBE + U, hybrid functionals and partially self-consistent GW: an efficient route to construct ab initio tight-binding parameters for e_ g perovskites. journalJournal of Physics: Condensed Matter volume24, pages235602 (year2012). QuanSheng authorWu, Q., authorZhang, S., authorSong, H.-F., authorTroyer, M. & authorSoluyanov, A. A. titleWanniertools: An open-source software package for novel topological materials. journalComput. Phys. Commun. volume224, pages405–416 (year2018). Roh2021 authorRoh, C. J. et al. titleStructural symmetry evolution in surface and interface of SrRuO_3 thin films. journalAppl. Surf. Sci. volume553, pages149574 (year2021).
http://arxiv.org/abs/2307.04035v1
20230708191401
A novel framework for Shot number minimization in Quantum Variational Algorithms
[ "Seyed Sajad Kahani", "Amin Nobakhti" ]
quant-ph
[ "quant-ph" ]
A novel framework for Shot number minimization in Quantum Variational Algorithms
Seyed Sajad Kahani Amin Nobakhti
July 8, 2023
=================================================================================

Variational Quantum Algorithms (VQAs) have gained significant attention as a potential solution for various quantum computing applications in the near term. However, implementing these algorithms on quantum devices often necessitates a substantial number of measurements, resulting in time-consuming and resource-intensive processes. This paper presents a generalized framework for optimization algorithms aiming to reduce the number of shot evaluations in VQAs. The proposed framework combines an estimator and an optimizer. We investigate two specific case studies within this framework. In the first case, we pair a sample mean estimator with a simulated annealing optimizer, while in the second case, we combine a recursive estimator with a gradient descent optimizer. In both instances, we demonstrate that our proposed approach yields notable performance enhancements compared to conventional methods.

§ INTRODUCTION

Variational Quantum Algorithms <cit.> have emerged as a promising solution for near-term applications of quantum computers. These versatile algorithms offer the capability to tackle a diverse range of complex problems, including but not limited to quantum chemistry <cit.>, combinatorial optimization <cit.>, and machine learning <cit.>. Despite their potential for near-term applications, variational algorithms often require a large number of measurements. This makes the implementation of these algorithms on quantum devices extremely time- and resource-intensive <cit.>, even when performed on shallow and low-width circuits. Various research efforts have sought to employ optimizers to reduce the computational burden of VQAs. These include the application of both existing and novel optimization techniques <cit.>. Such approaches are related to a well-studied and rich literature on the optimization of noisy functions in various fields such as signal processing and control theory (see for example <cit.> and <cit.>). Sweke et al. <cit.> introduced a quantum stochastic gradient descent optimizer that relies on a gradient estimator with a limited number of shots. They proved that, under some simplifying assumptions, this approach converges to the optimal values; however, the convergence rate depends on the error of the estimator. In another study, Polloreno et al. <cit.> studied the robustness of a double simulated annealing optimizer against inherent quantum noise, even when only a few shots are available and the noise is noticeable. Another approach to this problem has been to employ a nested optimization framework in which a high-level optimizer is used to improve the performance of a low-level optimizer by tuning its parameters. For example, Tamiya et al. <cit.> employed Bayesian optimization on stochastic measurement results to determine the optimal step size through a line search.
Inspired by stochastic gradient descent, this method incorporates an adaptive shot technique to reduce the number of measurements required during the line search. Similarly, Mueller et al. <cit.> proposed a technique to identify a suitable initial value set using Gaussian Processes. Subsequently, they utilized ImFil as the optimizer in their approach.

In this work we propose a generalized framework for optimization algorithms which seek to reduce shot-number evaluations in VQAs. The key performance-improving novelties of our approach are twofold. First, we devise a framework that incorporates powerful estimation techniques to achieve near-true parameter estimates from far fewer data samples. Second, by utilizing the sensitivity analysis of the optimizers, it is ensured that the error level of the estimators (and, as a result, the number of shots) is suitably chosen. This is made possible by breaking the problem into two separate estimation and optimization problems, and by deriving theoretical results on the sufficient number of shots. We explore two specific case studies within this framework. In the first case, a sample mean estimator is paired with a simulated annealing optimizer, and in the second case, a recursive estimator is paired with a gradient descent optimizer.

The remainder of the paper is organized as follows. In Section <ref>, background material, including quantum variational circuits and estimation theory, is presented. In Section <ref> we develop the proposed error control strategy and discuss the resulting optimization framework. In Section <ref> we present two case studies together with numerical results. Finally, in Section <ref>, we conclude our work.

§ BASIC CONCEPTS

§.§ Quantum Variational Algorithms

In the theory of quantum variational algorithms, the required input is the expected value of an observable O over the state generated by applying the parameterized quantum circuit U(θ) to the initial state |0⟩. This value is passed to a cost function 𝒞, which is to be minimized over the parameter space θ ∈ ℝ^m. Accordingly, the class of algorithms such as VQE, QAOA and QNN can be formulated as <cit.>

θ^* = argmin_θ∈ℝ^m 𝒞( ⟨0| U(θ)^† O U(θ) |0⟩ ).

Specific details of these algorithms are available in <cit.>. Here we would like to focus on their underlying operation. Let

f^U, O(θ) = ⟨0| U(θ)^† O U(θ) |0⟩,

in which U and O may be omitted when the discussion does not depend on the specific choice of U and O. One of the simplest and most widely used parameter-shift rules to compute the derivatives of f is given in Lemma <ref>.

[Parameter-shift rule <cit.>] Under the circumstance that the dependence of f on each parameter θ_k enters through a gate of the form e^-iθ_k P_k/2, where P_k is a Pauli operator, we have

∂_k f(θ) = [ f(θ + e_k π / 2) - f(θ - e_k π / 2) ] / 2,

where ∂_k denotes the partial derivative with respect to θ_k and e_k is the unit vector with 1 in the k-th position and 0 elsewhere.

Lemma <ref> is not only useful for calculating the derivative of f; it can also be used to bound higher derivatives of f, as shown in Lemma <ref>.

For any θ ∈ ℝ^m, we have

‖ Hess f ‖_2 ≤ m ‖ O ‖_2.

From the definition we know that |f(θ)| ≤ ‖O‖_2 for all θ ∈ ℝ^m. For any i and j there always exist some values θ_1, θ_2, θ_3, θ_4 for which

(Hess f)_ij = [ f(θ_1) - f(θ_2) - f(θ_3) + f(θ_4) ] / 4,

so that |(Hess f)_ij| ≤ ‖O‖_2. Accordingly, ‖ Hess f ‖_2 ≤ m ‖ O ‖_2.
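As a concrete illustration of Lemma <ref>, the sketch below applies the parameter-shift rule to the exactly solvable single-qubit case U(θ) = R_x(θ), O = Z (the benchmark problem used later in the case studies), for which f(θ) = cos θ. This code is added here for illustration only; on hardware each evaluation of f would itself be a shot-limited estimate rather than an exact function call.

```python
import numpy as np

def parameter_shift_grad(f, theta):
    """Gradient of f at theta via the parameter-shift rule:
    d f / d theta_k = [ f(theta + e_k*pi/2) - f(theta - e_k*pi/2) ] / 2."""
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for k in range(theta.size):
        shift = np.zeros_like(theta)
        shift[k] = np.pi / 2
        grad[k] = (f(theta + shift) - f(theta - shift)) / 2
    return grad

# Single-qubit benchmark: f(theta) = <0| Rx(theta)^dag Z Rx(theta) |0> = cos(theta)
f = lambda th: np.cos(th[0])
print(parameter_shift_grad(f, [0.3]), -np.sin(0.3))   # the two values agree exactly
```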
§.§ Estimation and Error Analysis

Contrary to the simple definition of f^U, O, evaluating such an expected value at each sample point may involve measurements with respect to ℓ multiple bases. Accordingly, the observable O is decomposed into ℓ observables, each of which is diagonal in a different basis, such that

O = ∑_j=1^ℓ V^†_j D_j V_j.

For each j, it is necessary to perform r_j repetitive measurements on a quantum circuit. The l-th (out of r_j) measurement outcome is considered as a sample from a random variable χ_j, l ∼ X(UV_j, D_j, θ). We know that 𝔼[χ_j,l] = f^UV_j, D_j(θ), and this is the reason we typically define an estimator f̂^U, O(θ) as follows.

A sample mean estimator for f is defined as

f̂^U, O(θ) = ∑_j=1^ℓ (1/r_j) ∑_l = 1^r_j χ_j, l,

and, for any of the ∂_k f's,

∂̂_k f^U, O(θ) = ∑_j=1^ℓ [ (1/2r_j+) ∑_l = 1^r_j+ χ_j+, l - (1/2r_j-) ∑_l = 1^r_j- χ_j-, l ],

where χ_j+, l ∼ X(UV_j, D_j, θ + e_k π / 2) and χ_j-, l ∼ X(UV_j, D_j, θ - e_k π / 2).

The performance of such an estimator can be bounded with the aid of Hoeffding's inequality, which provides confidence intervals for estimators of bounded random variables.

[Hoeffding's inequality <cit.>] For n independent random variables ξ_1, ξ_2, …, ξ_n with a_i ≤ ξ_i ≤ b_i for all i, and any t > 0, we have

Pr( | ∑_i=1^n ξ_i - ∑_i=1^n 𝔼[ξ_i] | ≥ t ) ≤ 2 exp( -2t^2 / ∑_i=1^n (b_i - a_i)^2 ).

Based on this, the following bounds are obtained for the MSE (mean square error) and confidence interval (CI) of the sample mean estimator.

[Sample mean estimator bounds] Define

ϵ_f = ∑_j=1^ℓ ‖D_j‖_2^2 / r_j, and ϵ_∂_k f = ∑_j=1^ℓ ( ‖D_j‖_2^2 / 4 ) ( 1/r_j+ + 1/r_j- ).

When ŝ is f̂^U, O or ∂̂_k f^U, O and ϵ denotes the corresponding bound ϵ_f or ϵ_∂_k f, then for any θ and κ > 0,

MSE[ ŝ(θ) ] ≤ ϵ, Pr( | ŝ(θ) - s(θ) | > κ√(ϵ) ) ≤ 2 e^-κ^2/2.

To prove the bounds for f, we start by setting the ξ's in Hoeffding's inequality to χ_j,l/r_j for the different j and l. They are bounded as -‖D_j‖_2/r_j ≤ χ_j,l/r_j ≤ ‖D_j‖_2/r_j, and it can thus be shown that

Pr( | f̂(θ) - f(θ) | > t ) ≤ 2 e^-2t^2/(4ϵ_f).

It now only remains to replace t with κ√(ϵ_f). From Popoviciu's inequality <cit.> it is evident that Var[ξ_i] ≤ (b_i - a_i)^2/4, which is used for the MSE of bounded random variables. The same results hold for the partial derivatives if we set the ξ's to χ_j±,l/2r_j± for the different j and l and the + and - signs.

§ MAIN RESULTS

§.§ Error Control Strategy

As mentioned in the introduction, a key performance-improving novelty of our work is the means to control the error level, as well as the number of shots. This is made possible by connecting the number of shots to the error level of any estimator through the problem below. Contrary to standard estimators, which often use a constant number of shots without any further analysis, we intend to find sufficient values for the r_j's such that the resulting estimation error is bounded by a specified amount.

[Sufficient Number of Shots] Given an estimator ŝ, find the values of the r_j's which satisfy the constraint

MSE[ŝ] ≤ E_s.

For the sample mean estimator discussed previously, solving Problem <ref> for f^U, O and ∂_k f^U, O is equivalent to the following optimisation problems,

argmin_{r_j ∈ ℕ} ∑_j=1^ℓ r_j s.t. MSE[f̂] ≤ E_f,

argmin_{r_j± ∈ ℕ} ∑_j=1^ℓ (r_j+ + r_j-) s.t. MSE[∂̂_k f] ≤ E_∂_k f.

Optimization problems <ref> and <ref> can be approximately solved using Algorithm <ref>. This algorithm solves the optimisations by relaxing the MSE values to the bounds ϵ_f and ϵ_∂_k f defined in Theorem <ref> and by allowing the r_j's and r_j±'s to take real values. We can easily verify the algorithm by substituting these values into the formulas of Theorem <ref> and deduce that the algorithm not only bounds the MSE but also provides a CI for the values.
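The relaxation just described admits a simple closed form: minimising ∑_j r_j over real-valued r_j subject to ∑_j ‖D_j‖_2^2 / r_j ≤ E_f gives, by a Lagrange-multiplier argument, r_j ∝ ‖D_j‖_2. The sketch below implements this closed form and verifies the resulting MSE bound. It is only a plausible stand-in for Algorithm <ref> (which is not reproduced in this text), and the ‖D_j‖_2 values are arbitrary illustrative numbers.

```python
import numpy as np

def eps_f(D_norms, r):
    """MSE bound of Theorem 1: eps_f = sum_j ||D_j||_2^2 / r_j."""
    return float(np.sum(np.asarray(D_norms) ** 2 / np.asarray(r)))

def sufficient_shots(D_norms, E_f):
    """Real-valued relaxation of the first problem above: r_j = ||D_j||_2 * sum_k ||D_k||_2 / E_f,
    rounded up to integers, which guarantees sum_j ||D_j||_2^2 / r_j <= E_f."""
    D_norms = np.asarray(D_norms, dtype=float)
    r = D_norms * D_norms.sum() / E_f
    return np.ceil(r).astype(int)

# Example: an observable measured in three bases with spectral norms 1.0, 0.5 and 0.5
D_norms = [1.0, 0.5, 0.5]
r = sufficient_shots(D_norms, E_f=1e-2)
print(r, eps_f(D_norms, r) <= 1e-2)   # shot allocation and check of the MSE bound
# CI of Theorem 1: |f_hat - f| <= kappa*sqrt(eps_f) with prob. >= 1 - 2*exp(-kappa^2/2)
```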
§.§ Optimizing Agent Regardless of technical detail, the function of all variational algorithms can be considered as that of agent which interacts with a quantum computer as shown in Figure <ref>. Such a high level conceptualization permits development of a unified framework for the evaluation of f, ∂_k f and higher derivatives. Most general purpose optimizers will not aim to control the number of shots which is often taken as a constant during the optimization. There have been attempts to develop adaptive algorithms such as <cit.> but the scope of their application is limited. Any optimizing agent will ultimately utilize available data by calculating a set of estimators. Statistically, it is possible to reduce the number of estimators to a sufficient set of estimators. For most typical optimizer, those estimates will be limited to f̂^U, O(θ_i) and ∂̂_k f^U, O(θ_i), where f^U, O is the function that is being optimized. However, by application of sufficient shot problem proposed earlier, it is possible to control the optimization error, instead of the number of shots. In our view this is a more natural way of looking at the problem. In such an improved strategy, the optimizer is provided with the errors E_f and E_∂_k f instead of r_j, and solves for f̂, ∂̂_k f instead of χ_j, l. This is illustrated in Figure <ref>. For the sake of simplicity we shall henceforth refer to f^U, O(θ_i) and ∂_k f^U, O(θ_i) as f_i and ∂_k f_i respectively. Moreover, this strategy can also be extended to the sample mean estimator f̂_i and ∂̂_̂k̂f_i, defined in Definition <ref>. In the proposed framework the main problem is broken down into two separate problems. These are, * An optimization problem of uncertain values, with a sensitivity analysis * An estimation problem, with the question of sufficient shots for the estimator. In the proposed framework one is not limited to the sample mean estimator defined in Definition <ref> and can make use of any static or dynamic estimator. Dynamic estimators will also have an internal states which is shown by a gray arrow in Figure <ref>. We will demonstrate the profound effectiveness of this approach by introducing a few examples of estimators and optimizers in the following section. For the sake of illustrating the methodology we shall make use of existing standard and rather simple optimization and estimation techniques. Evidently the eventual obtainable performance improvements can be much greater by a well matched and individually powerful optimizer and estimator. § CASE STUDIES §.§ Example I: Error-Aware Simulated Annealing A simple simulated annealing algorithm is a stochastic process that starts from a random point in the search space and iteratively moves to a new point with a transition probability P based on the values and temperature T_i at step i. In order to introduce the uncertainty, we only need to redefine the transition probability P̂ based on the estimator as follows, P̂(*θ_i+1 | *θ_i) = 1 if f̂_i+1 < f̂_i e^-f̂_i+1 - f̂_i/T_i otherwise. Then, the sensitivity can be analyzed as follows. In order to maintain an accuracy for P̂(*θ_i+1 | *θ_i) we seek, [D_KL(P ∥P̂)] ≤η, where D_KL is the Kullback-Leibler divergence. We know that this equation will hold if, [logP(*θ_i+1 | *θ_i)/P̂(*θ_i+1 | *θ_i)] ≤η ∀*θ_i+1. 
The right-hand side can be bounded using 𝔼[ |x - 𝔼[x]| ] ≤ √(Var[x]), the independence of f̂_i+1 and f̂_i, and the assumption of a monotonically decreasing temperature T_i+1 < T_i:

𝔼[ | log P(θ_i+1 | θ_i) - log P̂(θ_i+1 | θ_i) | ] ≤ (1/T_i) 𝔼[ | f̂_i+1 - f̂_i - f_i+1 + f_i | ] ≤ (1/T_i) √( Var[ f̂_i+1 - f̂_i ] ) ≤ (1/T_i) √( Var[f̂_i+1] + Var[f̂_i] ).

Note that the estimators should be unbiased, otherwise the equation above will not hold. Finally, we introduce the condition below, which is sufficient for the equation above and which, furthermore, bounds the KL divergence by η:

Var[ f̂_i+1 ] ≤ η^2 T_i^2 / 2.

This is a more efficient condition for the estimator than simply requiring Var[f̂_i+1] ≤ E.

In order to compare the performance of simulated annealing with and without the sensitivity analysis, we conducted three experiments as follows,

* Simple Optimizer (1): A simulated annealing optimizer with the condition Var[f̂_i+1] ≤ E with a high value for E.
* Simple Optimizer (2): A simulated annealing optimizer with the condition Var[f̂_i+1] ≤ E with a low value for E.
* Error-Aware Optimizer: A simulated annealing optimizer with Equation <ref> as the condition.

For the experimental studies, consider the benchmark problem defined in <ref>.

[Benchmark problem] Assume a variational task with one qubit, U(θ) = R_x(θ) and O = Z, with 𝒞 = I, which implies ℓ = 1 and m = 1. Then C(θ) = ⟨0| R_x^†(θ) Z R_x(θ) |0⟩ simplifies to cos θ.

We start with an ensemble of θ's near 0 and compare the distribution of the exact value of the function f through the optimization (with respect to the number of shots conducted) for each optimizer. The results are shown in Figure <ref>. To more clearly highlight the difference between the distributions, we have also plotted the distribution of data points after 7000 shots for each optimizer in Figure <ref>. Note that the error bound for the different optimizers as a function of the number of shots is shown in Figure <ref>, which is simply a visualisation of condition <ref>. The results show that the error-aware simulated annealing is able to find a better solution with fewer shots.

§.§ Example II: Recursive Estimator for Gradient Descent

To illustrate the flexibility of the framework with respect to the choice of estimators and optimizers, in this section we perform experiments with a standard gradient descent algorithm and a novel recursive estimator for the function and its derivative. The proposed recursive estimator works on the assumption that the distance between the evaluation points required by the optimizer at two consecutive iterations is not great. That is, the function (and possibly its gradient) at the next evaluation point θ_i+1 does not differ drastically from its value at θ_i. This assumption allows the update rule of the optimizer to be written in the form θ_i+1 = θ_i + δθ_i, where δθ_i is a vector with bounded norm. The proposed recursive estimation methodology is formally defined in Definition <ref>.

f̂^*_i = α_i ( f̂^*_i-1 + δθ_i-1 · ∇̂f^*_i-1 ) + (1 - α_i) f̂_i,
∂̂_k f^*_i = β_i ∂̂_k f^*_i-1 + (1 - β_i) ∂̂_k f_i,

with f̂^*_0 = f̂_0 and ∂̂_k f^*_0 = ∂̂_k f_0, where ∇̂f^*_i-1 denotes the vector of recursive derivative estimates ∂̂_k f^*_i-1. Note that the α_i's and β_i's are values between 0 and 1 and act as hyperparameters which control the relative weight given to prior knowledge. The optimal values of these parameters are derived in later sections. First we present Theorem <ref>, which derives theoretical bounds for the bias and variance of the estimate so obtained.

[Recursive estimator bounds] For any i,

| Bias[ f̂^*_i ] | ≤ B_i, | Bias[ ∂̂_k f^*_i ] | ≤ B_∂_k, i.
Where B_i and B_∂_k, i are calculated recursively as follows, B_i = α_i(B_i-1 + ∑_k=1^m (δ*θ_i-1)_k B_∂_k, i-1 + m/2δ*θ_i-1_2^2 O_2) B_∂_k, i = β_k,i(B_∂_k, i-1 + δ*θ_i-1_2 O_2) , B_0 = 0 B_∂_k, 0 = 0. and similarly for the variance, [f̂^*_i] ≤ A^2_i [∂̂_̂k̂f^*_i] ≤ A^2_∂_k, i. Using the notation in, Theorem <ref> A^2_i = α_i^2 (A^2_i-1 + ∑_k=1^m (δ*θ_i-1)_k^2 A^2_∂_k, i-1) + (1 - α_i)^2 ϵ^2_f_i A^2_∂_k, i = β_k,i^2 A^2_∂_k, i-1 + (1 - β_k,i)^2 ϵ^2_∂_k f_i, Defining the drift term d_i = f_i - 1 + δ*θ_i-1· f_i-1 - f_i, we can write the bias and variance of f̂^*_i as, [f̂^*_i] = α_i ([f̂^*_i-1] + δ*θ_i-1·[f^*_i-1] + d_i) [f̂^*_i] = α_i^2 ([f̂^*_i-1] + δ*θ_i - 1^2·[f^*_i-1]) + (1 - α_i)^2 [f̂_i]. In an abuse of notation, δ*θ^2_i-1 represents a vector of squared elements and [f^*_i-1] represents a vector of variances. This facilitates a more compact proof as shall be seen. With the same objective, we define another drift term for the derivatives of f as d_∂_k, i = ∂_k f_i - 1 - ∂_k f_i will helps us to write the bias and variance of ∂̂_̂k̂f^*_i as, [∂̂_̂k̂f^*_i] = β_k,i([∂̂_̂k̂f^*_i-1] + d_∂_k, i) [∂̂_̂k̂f^*_i] = β_k,i^2 [∂̂_̂k̂f^*_i-1] + (1 - β_k,i)^2 [∂̂_̂k̂f_i]. Combining Lemma <ref> with the mean value theorem, we have, d_i≤1/2δ*θ_i-1_2^2 m O_2 d_∂_k, i≤δ*θ_i-1_2 O_2. Finally, combining the above equations with the fact that [f̂_i] ≤ϵ^2_f_i and [∂̂_̂k̂ f_i] ≤ϵ^2_∂_k f_i completes the proof. For the confidence interval of recursive estimator, we can prove the following result, [Confidence Interval] As a result of Theorem <ref> the following equation is valid for s^* is any of f_is or ∂_k f_is, simply by setting corresponding A and Bs. [ŝ^*] ≤ B^2 + A^2, (ŝ^* - f > κ A + B) ≤ 2e^-κ^2/2. While the expression for the MSE is trivial, for the confidence interval we have, (f̂^*_i - [f̂^*_i] > κ√(A_i)) ≤ 2e^-κ^2/2. This is true because f̂^*_i is a linear combination of χs that are from bounded distributions. Accordingly, Hoeffding's inequality applies. Moreover, there is a one-to-one correspondence between bounds from Hoeffding's and Popoviciu's inequalities (see the proof of Theorem <ref>), which obviously validates the equation above. Since f̂^*_i - f_i > κ√(A_i) + B_i ⇒f̂^*_i - [f̂^*_i] > κ√(A_i), (f̂^*_i - f_i > κ√(A_i) + B_i) ≤(f̂^*_i - [f̂^*_i] > κ√(A_i)) ≤ 2e^-κ^2/2. Finally, we need to solve the sufficient shots problem (Problem <ref>) for the recursive estimator. The actual objective is to solve, r_j, i, r_j±,i∈ℕ, α_i, β_k,iargmin ∑_i=1^∞∑_j=1^ℓ r_j, i + ∑_k=1^m r_j+, k, i + r_j-, k, i s. t. ∀ i [f̂^*_i] ≤ E_f s. t. ∀ i, k [∂̂_k f^*_i] ≤ E_∂_k f. However, we solve an iterative version as in Algorithm <ref>, min_r_j ∈ℕ, α_i∑_j=1^ℓ r_j s. t. [f̂^*_i] ≤ E_f. min_r_j,±∈ℕ, β_k,i∑_j=1^ℓ r_j+ + r_j- s. t. [∂̂_k f^*_i] ≤ E_∂_k f. Combining the two leads to Algorithm <ref>. Note that with this algorithm, for the same error bound, the number of shots for a recursive estimator of a function will be at max equal to the number of shots for the naive estimator of that function. To illustrate the performance of Algorithm <ref>, first we apply the estimator for the variational Problem <ref> with a random (zero mean) initial point and a simple gradient-descent optimizer. Figure <ref> shows the estimated values (with CIs) of the loss function, for different estimators, as a function of the number of shots used to evaluate the function. It is evident that the proposed recursive estimator is outperforming the sample mean estimator by a significant margin. 
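For readers who prefer code to recursions, the following is a minimal numerical sketch of the recursive estimator together with its bias/variance bookkeeping. It is illustrative only: the class name, the fixed blending weights alpha and beta (the optimal per-iteration weights are derived above), and the interface that accepts fresh sample-mean estimates with known variance bounds are assumptions made here for concreteness, not a reference implementation.

```python
import numpy as np

class RecursiveEstimator:
    """Sketch of the recursive estimator: blend an extrapolation of the previous
    estimate with a fresh sample-mean estimate, and track bias/variance bounds."""

    def __init__(self, f0, grad0, var_f0, var_grad0, op_norm, alpha=0.5, beta=0.5):
        self.f = f0                                         # f-hat*_0
        self.grad = np.asarray(grad0, dtype=float)          # gradient estimate
        self.var_f = var_f0                                 # A_0^2
        self.var_grad = np.asarray(var_grad0, dtype=float)  # A_{dk,0}^2
        self.bias_f = 0.0                                    # B_0
        self.bias_grad = np.zeros_like(self.grad)            # B_{dk,0}
        self.op_norm = op_norm                               # ||O||_2, bounds the drift
        self.alpha, self.beta = alpha, beta

    def update(self, dtheta, f_new, grad_new, var_f_new, var_grad_new):
        """One optimizer step theta_{i+1} = theta_i + dtheta with fresh estimates."""
        dtheta = np.asarray(dtheta, dtype=float)
        a, b, m = self.alpha, self.beta, dtheta.size

        # value: extrapolate with the previous gradient estimate, then blend
        f_star = a * (self.f + dtheta @ self.grad) + (1.0 - a) * f_new

        # bias bound B_i and variance bound A_i^2 (using the previous bounds)
        bias_f = a * (self.bias_f + np.abs(dtheta) @ self.bias_grad
                      + 0.5 * m * np.dot(dtheta, dtheta) * self.op_norm)
        var_f = a**2 * (self.var_f + (dtheta**2) @ self.var_grad) \
            + (1.0 - a)**2 * var_f_new

        # gradient estimate and its bounds, componentwise
        grad_star = b * self.grad + (1.0 - b) * np.asarray(grad_new, dtype=float)
        bias_grad = b * (self.bias_grad + np.linalg.norm(dtheta) * self.op_norm)
        var_grad = b**2 * self.var_grad \
            + (1.0 - b)**2 * np.asarray(var_grad_new, dtype=float)

        self.f, self.grad = f_star, grad_star
        self.bias_f, self.bias_grad = bias_f, bias_grad
        self.var_f, self.var_grad = var_f, var_grad
        return f_star, grad_star

    def mse_bound(self):
        # MSE <= B^2 + A^2, as in the confidence-interval corollary
        return self.bias_f**2 + self.var_f
```

A gradient-descent loop would call update once per iteration and choose the number of shots behind the fresh estimates so that mse_bound stays below the prescribed error E_f, in the spirit of the iterative sufficient-shots procedure above.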
Another comparison, visualizing the number of shots used in each GD iteration, is shown in Figure <ref>. To verify the theoretical results derived earlier, the bounds on the MSE and CI are compared with the actual values of the MSE and CI of the estimators in Figures <ref> and <ref>, respectively. For further experimental verification, the same experiment has also been carried out on the more complex MaxCut problem for a square graph (V = 4 and E = 4). The results are shown in Figure <ref> and Figure <ref>. § CONCLUDING REMARKS In this paper, we proposed a generalized framework for optimization algorithms that seek to reduce the number of shot evaluations in VQAs. In its general form, the proposed framework combines an estimator with a numerical optimization algorithm. We introduced the sufficient shots problem and proposed an algorithm for solving it with the sample mean estimator. This concept, together with the sensitivity analysis of the optimizer, allows us to control the number of shots, leading to a more natural and effective optimization process. Two specific case studies of this framework were subjected to extensive experiments. In the first case, a sample mean estimator was coupled with a simulated annealing optimizer, and in the second case, a recursive estimator was coupled with a gradient descent optimizer. In both cases we demonstrated that the proposed approach achieves significant performance improvements over conventional methods. Our results highlight the importance of error control strategies and of incorporating them into the design of optimizers for variational quantum algorithms. By leveraging estimators with error control and integrating them with interactive optimization processes, we can achieve better optimization performance and reduce the resource requirements of quantum computations. Overall, this work contributes to advancing the field of variational quantum algorithms by providing a systematic framework for designing error-aware optimizers. The presented approaches and results open up new possibilities for improving the efficiency and effectiveness of quantum computing research in various domains, such as quantum chemistry, combinatorial optimization, and machine learning. Future directions could explore further extensions and applications of the proposed framework, as well as experimental validations on quantum devices. § APPENDIX
http://arxiv.org/abs/2307.05283v1
20230711142432
On the Identity and Group Problems for Complex Heisenberg Matrices
[ "Paul C. Bell", "Reino Niskanen", "Igor Potapov", "Pavel Semukhin" ]
cs.DM
[ "cs.DM", "math.CO" ]
proof*[1] P.C. Bell et al. Keele University, UK [email protected] Liverpool John Moores University, UK {r.niskanen,p.semukhin}@ljmu.ac.uk University of Liverpool, UK [email protected] On the Identity and Group Problems for Complex Heisenberg Matrices Paul C. Bell1 Reino Niskanen2 Igor Potapov3 Pavel Semukhin2 ================================================================== We study the Identity Problem, the problem of determining if a finitely generated semigroup of matrices contains the identity matrix; see Problem 3 (Chapter 10.3) in “Unsolved Problems in Mathematical Systems and Control Theory” by Blondel and Megretski (2004). This fundamental problem is known to be undecidable for ℤ^4 × 4 and decidable for ℤ^2 × 2. The Identity Problem has been recently shown to be in polynomial time by Dong for the Heisenberg group over complex numbers in any fixed dimension with the use of Lie algebra and the Baker-Campbell-Hausdorff formula. We develop alternative proof techniques for the problem making a step forward towards more general problems such as the Membership Problem. We extend our techniques to show that the fundamental problem of determining if a given set of Heisenberg matrices generates a group, can also be decided in polynomial time. § INTRODUCTION Matrices and matrix products can represent dynamics in many systems, from computational applications in linear algebra and engineering to natural science applications in quantum mechanics, population dynamics and statistics, among others <cit.>. The analysis of various evolving systems requires solutions of reachability questions in linear systems, which form the essential part of verification procedures, control theory questions, biological systems predictability, security analysis etc. Reachability problems for matrix products are challenging due to the complexity of this mathematical object and a lack of effective algorithmic techniques. The significant challenge in the analysis of matrix semigroups was initially illustrated by Markov(1947), <cit.> and later highlighted by Patterson (1970) <cit.>, Blondel and Megretski (2004) <cit.>, and Harju (2009) <cit.>. The central reachability question is the Membership Problem: Decide whether or not a given matrix M belongs to the matrix semigroup S generated by a set of square matrices G. By restricting M to be the identity matrix, the problem is known as the Identity Problem. [Identity Problem] Let S be a matrix semigroup generated by a finite set of n×n matrices over 𝕂=,ℚ,𝔸,ℚ(),… Is the identity matrix I in the semigroup, i.e., does I∈ S hold? The Membership Problem is known to be undecidable for integer matrices from dimension three, but the decidability status of the Identity Problem was unknown for a long time for matrix semigroups of any dimension, see Problem 10.3 in “Unsolved Problems in Mathematical Systems and Control Theory” <cit.>. The Identity Problem was shown to be undecidable for 48 matrices from ℤ^4 × 4 in <cit.> and for a generator of eight matrices in <cit.>. This implies that the Group Problem (decide whether a finitely generated semigroup is a group) is also undecidable. The Identity Problem and the Group Problem are open for ℤ^3 × 3. The Identity Problem for a semigroup generated by 2 × 2 matrices was shown to be decidable in <cit.> and later improved by showing to be -complete in <cit.>. The only decidability beyond integer 2×2 matrices were shown in <cit.> for flat rational subsets of . 
Similarly to <cit.>, the work <cit.> initiated consideration of matrix decision problems in the Special Linear Group , by showing that there is no embedding from pairs of words into matrices from . Beyond the 2×2 case, the Identity Problem was shown to be decidable for the discrete Heisenberg group H(3,ℤ) which is a subgroup of . The Heisenberg group is widely used in mathematics and physics. This is in some sense the simplest non-commutative group, and has close connections to quantum mechanical systems <cit.>, harmonic analysis, and number theory <cit.>. It also makes appearances in complexity theory, e.g., the analysis and geometry of the Heiseberg group have been used to disprove the Goemans-Linial conjecture in complexity theory <cit.>. Matrices in physics and engineering are ordinarily defined with values over ℝ or ℂ. In this context, we formulate our decision problems and algorithmic solutions over the field of complex numbers with a finite representation, Gaussian rationals . The Identity Problem was recently shown to be decidable in polynomial time for complex Heisenberg matrices in a paper by Dong <cit.>. They first prove the result for upper-triangular matrices with rational entries and ones on the main diagonal, UT() and then use a known embedding of the Heisenberg group over algebraic numbers into UT(). Their approach is different from our techniques; the main difference being that <cit.> uses tools from Lie algebra, and in particular, matrix logarithms and the Baker-Campbell-Hausdorff formula, to reason about matrix products and their properties. In contrast, our approach first characterises matrices which are `close to' the identity matrix, which we denote Ω-matrices. Such matrices are close to the identity matrix in that they differ only in a single position in the top-right corner. We then argue about the commutator angle of matrices within this set in order to determine whether zero can be reached, in which case the identity matrix is reachable. We believe that these techniques take a step towards proving the decidability of the more general membership problem, which we discuss towards the end of the paper. A careful analysis then follows to ensure that all steps require only Polynomial time, and we extend our techniques to show that determining if a given set of matrices forms a group (the group problem) is also decidable in (this result is shown in <cit.> using different techniques). We thus present polynomial time algorithms for both these problems for Heisenberg matrices over ℚ(i) in any dimension n. These new techniques allow us to extend previous results for the discrete Heisenberg group H(n,ℤ) and H(n,ℚ) <cit.> and make a step forward towards proving the decidability of the membership problem for complex Heisenberg matrices. § ROADMAP We will give a brief overview of our approach here. Given a Heisenberg matrix M=[ 1 m_1^T m_3; 0 I_n-2 m_2; 0 0^T 1 ]∈, denote by ψ(M) the triple (m_1,m_2,m_3) ∈^2n-3. We define the set Ω⊆ as those matrices where m_1 and m_2 are zero vectors, i.e., matrices in Ω look like I_n except allowing any element of in the top right element. Such matrices play a crucial role in our analysis. In particular, given a set of matrices G = {G_1, …, G_t}⊆ generating a semigroup ⟨ G ⟩, we can find a description of Ω_⟨ G ⟩ = ⟨ G ⟩∩Ω. Since I∈Ω, the Identity Problem reduces to determining if I∈Ω_⟨ G ⟩. Several problems present themselves, particularly if we wish to solve the problem in Polynomial time (). 
The set Ω_⟨ G ⟩ is described by a linear set 𝒮⊆ℕ^t, which is the solution set of a homogeneous system of linear Diophantine equations induced by matrices in G. This is due to the observation that the elements (m_1,m_2) ∈^2n-4 behave in an additive fashion under multiplication of Heisenberg matrices. The main issue is that the size of the basis of 𝒮 is exponential in the description size of G. Nevertheless, we can determine if a solution exists to such a system in (<ref>), and this proves sufficient. The second issue is that reasoning about the element m_3 ∈ (i.e., the top right element) in a product of Heisenberg matrices is much more involved than for elements (m_1,m_2) ∈^2n-4. Techniques to determine if m_3 = 0 for an Ω-matrix within Ω_⟨ G ⟩ take up the bulk of this paper. The key to our approach is to consider commutators of pairs of matrices within G, which in our case can be described by a single complex number. After removing all redundant matrices (those never reaching an Ω-matrix), we have two cases to consider. Either every pair of matrices from G has the same angle of the commutator or else there are at least two commutators with different angles. The latter case is used in <ref>. It states that the identity matrix can always be constructed using a solution that contains four particular matrices. Let M_1, M_2, M_3 and M_4 be such that [M_1,M_2]=rexp(γ) and [M_3,M_4]=r'exp(γ'), where γ≠γ' so that pairs M_1, M_2 and M_3, M_4 have different commutator angles. We may then define four matrix products using the same generators but matrices M_1, M_2, M_3 and M_4 are in a different order. This difference in order and the commutator angles being different, ensures that we can control the top right corner elements in order to construct the identity matrix. <ref> provides details on how to calculate the top right element in these products. We then prove that these top right elements in the four matrices are not contained in an open half-plane and this is sufficient for us to construct the identity matrix. The above construction does not work when all commutators have the same angle, and indeed in this case the identity may or may not be present. Hence, we need to consider various possible shuffles of matrices in these products. To this end, we extend the result of <ref> to derive a formula for the top right element for any shuffle and prove it as <ref>. We observe that there is a shuffle invariant part of the product that does not depend on the shuffle, and that shuffles add or subtract commutators. Furthermore, this shuffle invariant component can be calculated from the generators used in the product. As we assume that all commutators have the same angle, γ, different shuffles move the value along the line in the complex plane defined by the common commutator angle which we call the γ-line. It is straightforward to see that if it is not possible to reach the γ-line using using the additive semigroup of shuffle invariants, then the identity cannot be generated. Indeed, since different shuffles move the value along the γ-line but the shuffle invariant part never reaches it, then the possible values are never on the γ-line, which includes the origin. We show that if it is possible to reach the γ-line using shuffle invariants and there are at least two non-commuting matrices in the used solution, then we can show that the identity matrix is in the semigroup (<ref>). 
Testing this property requires determining the solvability of a polynomially-sized set of non-homogeneous systems of linear Diophantine equations, which can be done in polynomial time by <ref>. If the γ-line can be reached only using commuting matrices, we can construct another system of linear Diophantine equations since the top right element has an explicit formula in terms of generators used (see <ref>). § PRELIMINARIES The sets of rational numbers, real numbers and complex numbers are denoted by , and . The set of rational complex numbers is denoted by ={a+b| a,b∈}. The set is often called the Gaussian rationals in the literature. A complex number can be written in polar form a+b=rexp(φ), where r∈ and φ∈ [0,π). We denote the angle of the polar form φ by (a+b). We also denote (a+b)=a and (a+b)=b. It is worth highlighting that commonly the polar form is defined for a positive real r and an angle between [0,2π). These two definitions are obviously equivalent. The identity matrix is denoted by I_n or, if the dimension n is clear from the context, by I. The Heisenberg group is formed by n × n matrices of the form M = [ 1 m_1^T m_3; 0 I_n-2 m_2; 0 0^T 1 ], where m_1,m_2 ∈𝕂^n-2, m_3 ∈𝕂 and 0 = (0, 0, …, 0)^T ∈^n-2 is the zero vector. It is easy to see that the Heisenberg group is a non-commutative subgroup of SL(n,𝕂)={M∈𝕂^n× n|(M)=1}. We will be interested in subsemigroups of which are finitely generated. Given a set of matrices G = {G_1, …, G_t}⊆, we denote the matrix semigroup generated by G as ⟨ G ⟩. Let M=[ 1 m_1^T m_3; 0 I_n-2 m_2; 0 0^T 1 ], then (M)_1,n = m_3 is the top right element. To improve readability, by ψ(M) we denote the triple (m_1,m_2,m_3) ∈^2n-3. The vectors m_1,m_2 play a crucial role in our considerations. We define the set Ω⊆ as those matrices where m_1 and m_2 are zero vectors, i.e., matrices in Ω look like I_n except allowing any element of in the top right element. That is, Ω={[ 1 0^T m_3; 0 I_n-2 0; 0 0^T 1 ]| m_3∈}, where 0 = (0, 0, …, 0)^T ∈^n-2 is the zero vector. Let us define a shuffling of a product of matrices. Let M_1,…, M_k∈ G. The set of permutations of a product of these matrices is denoted by (M_1,…, M_k)={M_σ(1)⋯ M_σ(k)|σ∈𝒮_k}, where 𝒮_k is the set of permutations on k elements. If some matrix appears multiple times in the list, say M_1 appears x times, we write (M_1^x,M_2,…,M_k) instead of (M_1,…,M_1_x times,M_2,…,M_k). We further extend this notion and for a matrix M=M_1M_2⋯ M_k∈⟨ G⟩, (M)=(M_1M_2⋯ M_k) and (M_1,M_2,…, M_k) denote the same set. Let M_1=[ 1 _1^T c_1; 0 I_n-2 _1; 0 0^T 1 ] and M_2=[ 1 _2^T c_2; 0 I_n-2 _2; 0 0^T 1 ]. By an abuse of notation, we define the commutator [M_1,M_2] of M_1 and M_2 by [M_1,M_2]=_1^T_2-_2^T_1∈. Note that the commutator of two arbitrary matrices A, B is ordinarily defined as [A, B] = AB - BA, i.e., a matrix. However, for matrices M_1, M_2 ∈, it is clear that M_1M_2-M_2M_1 = [ 0 ^T _1^T_2-_2^T_1; 0 O ; 0 0^T 0 ], where O is the (n-2)× (n-2) zero matrix, thus justifying our notation which will be used extensively. Observe that the matrices M_1, M_2 commute if and only if [M_1,M_2]=0. Note that the commutator is antisymmetric, i.e., [M_1,M_2]=-[M_2,M_1]. We further say that γ is the angle of the commutator if [M_1,M_2]=rexp(γ) for some r∈ and γ∈[0,π). If two commutators [M_1,M_2], [M_3,M_4] have the same angles, that is, [M_1,M_2]=rexp(γ) and [M_3,M_4]=r'exp(γ) for some r,r'∈, then we denote this property by [M_1,M_2][M_3,M_4]. If they have different angles, then we write [M_1,M_2][M_3,M_4]. 
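Since all of the objects just defined are elementary, they are easy to experiment with numerically. The following sketch is an illustration added here (not part of the paper): it stores a Heisenberg matrix via ψ(M) = (m_1, m_2, m_3), with Gaussian-rational entries approximated by Python complex numbers, and implements the product, the commutator, and the Ω-test described above.

```python
def multiply(psi_M, psi_N):
    """psi of the product M*N: the (m1, m2) parts add, and the top-right
    corner picks up the extra cross term a1^T b2."""
    (a1, b1, c1), (a2, b2, c2) = psi_M, psi_N
    a = [x + y for x, y in zip(a1, a2)]
    b = [x + y for x, y in zip(b1, b2)]
    c = c1 + c2 + sum(x * y for x, y in zip(a1, b2))
    return (a, b, c)

def commutator(psi_M, psi_N):
    """[M, N] = a1^T b2 - a2^T b1, a single complex number."""
    (a1, b1, _), (a2, b2, _) = psi_M, psi_N
    return (sum(x * y for x, y in zip(a1, b2))
            - sum(x * y for x, y in zip(a2, b1)))

def is_omega(psi_M):
    a, b, _ = psi_M
    return all(x == 0 for x in a) and all(x == 0 for x in b)

# Example in H(3, Q(i)): two generators and their commutator.
M = ([1 + 1j], [0j], 0j)
N = ([0j], [2j], 0j)
print(commutator(M, N))          # (1+1j)*(2j) = -2+2j
print(is_omega(multiply(M, N)))  # False: the (m1, m2) parts did not cancel
```

Exact arithmetic over ℚ(i) would use pairs of rational numbers instead of floating-point complex values; the structure of the computation is unchanged.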
By convention, if [M_1,M_2]=0, then [M_1,M_2][M_3,M_4] for every M_3, M_4 ∈. To show that our algorithms run in polynomial time, we will need the following lemma. * Let A∈^n× m be a rational matrix, and b∈^n be an n-dimensional rational vector with non-negative coefficients. Then we can decide in polynomial time whether the system of inequalities Ax≥b has an integer solution x∈^m. * Let A_1∈^n_1× m and A_2∈^n_2× m be a rational matrices. Then we can decide in polynomial time whether the system of inequalities A_1x≥0^n_1 and A_2x > 0^n_2 has an integer solution x∈^m. (i) We will show that the system Ax≥b has an integer solution x∈^m if and only if it has a rational solution x∈^m. One direction is obvious. So, suppose there is a rational vector x∈^m such that Ax≥b. Let r≥ 1 be the least common multiple of the denominators of all the coefficients of x. Let x' = rx∈^m. Hence we have Ax' = A(rx) = r(Ax) ≥ rb≥b, where the last two inequalities hold because b has non-negative coefficients and r≥ 1. The finish the proof, note that we can decide in polynomial time whether a system of linear inequalities has a rational solution using linear programming, see <cit.>. (ii) Since the system of inequalities is homogeneous, we can assume without loss of generality that matrices A_1, A_2 have integer coefficients. Hence the condition A_2x > 0^n_2 is equivalent to A_2x≥1^n_2 for any integer vector x∈^m, where 1^n_2 is a vector of dimension n_2 with coordinates 1. By the first part, we can decide in polynomial time whether the system of inequalities A_1x≥0^n_1 and A_2x≥1^n_2 has an integer solution x∈^m. § PROPERTIES OF OMEGA-MATRICES To solve the Identity Problem for subsemigroups of (Problem <ref>), we will be analysing matrices in Ω (matrices with all zero elements, except possibly the top-right corner value). Let us first discuss how to construct Ω-matrices from a given set of generators G ⊆. As observed earlier, when multiplying Heisenberg matrices of the form [ 1 m_1^T m_3; 0 I_n-2 m_2; 0 0^T 1 ], elements m_1 and m_2 are additive. We can thus construct a homogeneous system of linear Diophantine equations (SLDEs) induced by matrices in G. Each Ω-matrix then corresponds to a solution to this system. Let G={G_1,…,G_t}, where ψ(G_i)=(a_i,b_i,c_i). For a vector ∈^n-2, define () = (((1)), …, ((n-2)) (similarly for ()). We consider system A=0, where A=[ (_1) (_2) ⋯ (_t); (_1) (_2) ⋯ (_t); (_1) (_2) ⋯ (_t); (_1) (_2) ⋯ (_t) ], ∈^t and 0 is the 4(n-2)-dimensional zero vector; noting that A ∈ℚ^4(n-2) × t. Let 𝒮={s_1,…,s_p} be the set of minimal solutions to the system. Recall that elements of 𝒮 are irreducible. That is, a minimal solution cannot be written as a sum of two nonzero solutions. The set 𝒮 is always finite and constructable <cit.>. A matrix M_i∈ G is redundant if the ith component is 0 in every minimal solution s∈𝒮. Non-redundant matrices can be recognized by checking whether a non-homogeneous SLDE has a solution. More precisely, to check whether M_i is non-redundant, we consider the system A=0 together with the constraint that (i)≥1, where (i) is the ith component of . Using <ref>, we can determine in polynomial time whether such a system has an integer solution. For the remainder of the paper, we assume that G is the set of non-redundant matrices. This implicitly assumes that for this G, the set 𝒮≠∅. Indeed, if there are no solutions to the corresponding SLDEs, then all matrices are redundant. Hence G=∅ and I∉⟨ G⟩ holds trivially. Let M_1,…,M_k∈ G be such that X = M_1⋯ M_k∈Ω. 
The Parikh vector of occurrences of each matrix from G in product X may be written as x = (m_1, …, m_t) ∈ℕ^t. This Parikh vector x is a linear combination of elements of 𝒮, i.e., x = ∑_j=1^p y_j s_j, with y_j ∈ℕ, because x is a solution to the SLDEs. Each element of (M_1, …, M_k) has the same Parikh vector, but their product is not necessarily the same matrix; potentially differing in the top right element. Let us state some properties of Ω-matrices. The Ω-matrices are closed under matrix product; the top right element is additive under the product of two matrices; and Ω-matrices commute with Heisenberg matrices. In other words, let A, B ∈Ω and M∈, then * AB ∈Ω; * (AB)_1,n = A_1,n + B_1,n; * AM=MA. Furthermore, if N=M_1M_2⋯ M_k-1M_k ∈Ω for some M_1,…,M_k∈, then every cyclic permutation of matrices results in N. That is, N=M_2M_3⋯ M_kM_1=⋯= M_kM_1⋯ M_k-2M_k-1 The first three claims follow from the definition of Ω-matrices. Let us present the proofs for the sake of completeness. Let A=[ 1 0^T a_3; 0 I_n-2 0; 0 0^T 1 ] and B=[ 1 0^T b_3; 0 I_n-2 0; 0 0^T 1 ]. Now, AB=[ 1 0^T a_3+b_3; 0 I_n-2 0; 0 0 1 ] as was claimed in (i) and (ii). Let then M=[ 1 m_1^T m_3; 0 I_n-2 m_2; 0 0^T 1 ]. Now AM=[ 1 m_1^T a_3+m_3; 0 I_n-2 m_2; 0 0^T 1 ] and MA=[ 1 m_1^T m_3+a_3; 0 I_n-2 m_2; 0 0^T 1 ]. Let us proof the final claim. We will prove that M_1M_2⋯ M_k-1M_k= M_2M_3⋯ M_kM_1. The other cyclic permutations are proven analogously. Denote ψ(M_i)=(_i,_i,c_i) for i=1,…,k. Now by a direct computation, ψ(M_1M_2⋯ M_k-1M_k) = (∑_i=1^k_i,∑_i=1^k_i,∑_i=1^k c_i+∑_1≤ i<j≤ k_i^T _j) and ψ(M_2M_3⋯ M_kM_1) = (∑_i=1^k_i,∑_i=1^k_i,∑_i=1^k c_i+∑_2≤ i<j≤ k_i^T_j + ∑_i=2^k_i^T_1). As the resulting matrices are in Ω, ∑_i=1^k_i=∑_i=1^k_i=. Hence, we only need to prove that the third components are equal. Observe that -_1=∑_i=2^k _i and _1=-∑_i=2^k_i. Now ∑_i=2^k(_i^T·_1) = (∑_i=2^k _i^T )·_1 = (-_1^T)·(-∑_i=2^k_i)=∑_i=2^k (_1^T·_i). This allows us to rewrite ∑_2≤ i<j≤ k_i^T_j + ∑_i=2^k_i^T_1 as ∑_2≤ i<j≤ k_i^T_j + ∑_i=2^k_1^T_i=∑_1≤ i<j≤ k_i^T_j, which is the third component of the first product. We require the following technical lemma that allows us to calculate the value in top right corner for particular products. The claim is proven by a direct computation. Let M_1,M_2,…,M_k∈ such that M_1M_2⋯ M_k∈Ω and let ℓ≥ 1. Then, (M_1^ℓ M_2^ℓ⋯ M_k^ℓ)_1,n= ℓ∑_i=1^k (c_i-1/2_i^T_i)+ℓ^2/2∑_1≤ i<j≤ k-1[M_i,M_j], where ψ(M_i)=(a_i,b_i,c_i) for each i=1,…,k. Denote ψ(M_i)=(_i,_i,c_i). A direct calculation shows that the element in the top right corner is ∑_i=1^kℓ c_i + ∑_i=1^k ℓ(ℓ-1)/2_i^T_i+ℓ^2∑_1≤ i<j≤ k_i^T_j. That is, the coefficient of ℓ is already in the desired form. Let us rewrite the coefficient of ℓ^2 as follows: ∑_i=1^k-11/2_i^T_i+1/2_k^T_k+∑_1≤ i<j≤ k-1_i^T_j+∑_i=1^k-1_i^T_k =∑_i=1^k-11/2_i^T_i+∑_1≤ i<j≤ k-1_i^T_j+∑_i=1^k-1_i^T_k+_k^T_k-1/2_k^T_k =∑_i=1^k-11/2_i^T_i+∑_1≤ i<j≤ k-1_i^T_j+∑_i=1^k_i^T_k-1/2_k^T_k =∑_i=1^k-11/2_i^T_i+∑_1≤ i<j≤ k-1_i^T_j-1/2_k^T_k, where the first equality was obtained by taking terms with subindex k out of the sums, the second by adding 1/2_k^T_k-1/2_k^T_k, and the final two by observing that ∑_i=1^k_i=. Now, _k^T_k can be rewritten as _k^T_k=(∑_i=1^k-1_i)^T ·∑_j=1^k-1_j = ∑_1≤ i<j≤ k-1_i^T_j+∑_1≤ j<i≤ k-1_i^T_j+∑_i=1^k-1_i^T_i. Combining this with the previous equation, we finally obtain that the coefficient of ℓ^2 is ∑_i=1^k-11/2_i^T_i+∑_1≤ i<j≤ k-1_i^T_j-1/2(∑_1≤ i<j≤ k-1_i^T_j+∑_1≤ j<i≤ k-1_i^T_j+∑_i=1^k-1_i^T_i) = 1/2(∑_1≤ i<j≤ k-1_i^T_j-∑_1≤ j<i≤ k-1_i^T_j) = 1/2∑_1≤ i<j≤ k-1[M_i,M_j]. 
Thus completing the proof. If we further assume that the matrices from the previous lemma commute, then for every M∈(M_1^ℓ,M_2^ℓ,…, M_k^ℓ): M_1,n = ℓ∑_i=1^k (c_i-1/2_i^T_i)+ℓ^2/2∑_1≤ i<j≤ k-1[M_i,M_j] = ℓ∑_i=1^k (c_i-1/2_i^T_i), noting that [M_i,M_j] = 0 when matrices M_i and M_j commute. In Lemma <ref>, the matrix product has an ordering which yielded a simple presentation of the value in the top right corner. In the next lemma, we consider an arbitrary shuffle of the product and show that the commutators are important when expressing the top right corner element. Let M_1,M_2,…,M_k∈ such that M_1M_2⋯ M_k∈Ω and let ℓ≥ 1. Let M be a shuffle of the product M_1^ℓ M_2^ℓ⋯ M_k^ℓ by a permutation σ that acts on kℓ elements. Then (M)_1,n=ℓ∑_i=1^k (c_i-1/2_i^T_i)+ℓ^2/2∑_1≤ i<j≤ k-1 [M_i,M_j] -∑_1≤ i<j≤ kz_ji[M_i,M_j], where ψ(M_i)=(a_i,b_i,c_i) for i=1,…,k, and z_ji is the number of times M_j appears before M_i in the product; so z_ji is the number of inversions of i,j in σ. As in <ref>, we proceed by a direct calculation. Denote ψ(M_i)=(_i,_i,c_i). First, let us denote by z_ij the number of times matrix M_i is to the left of matrix M_j in M. Note, that z_ij+z_ji=ℓ^2 as there are in total ℓ^2 multiplications of M_i and M_j. Now, the direct calculation of the top right element in M is ∑_i=1^kℓ c_i + ∑_i=1^k ℓ(ℓ-1)/2_i^T_i+∑_1≤ i<j≤ kz_ij_i^T_j+∑_1≤ j<i≤ kz_ij_i^T_j. As in the proof of <ref>, the ℓ terms are as in the claim. Let us focus on the ℓ^2 term. We add ∑_1≤ i<j≤ kz_ji_i^T_j-∑_1≤ i<j≤ kz_ji_i^T_j=0 to the term, resulting in ℓ^2∑_i=1^k 1/2_i^T_i+∑_1≤ i<j≤ kz_ij_i^T_j+∑_1≤ j<i≤ kz_ij_i^T_j+∑_1≤ i<j≤ kz_ji_i^T_j-∑_1≤ i<j≤ kz_ji_i^T_j = ℓ^2∑_i=1^k 1/2_i^T_i+ℓ^2∑_1≤ i<j≤ k_i^T_j+∑_1≤ j<i≤ kz_ij(_i^T_j-_j^T_i) =ℓ^2/2∑_1≤ i<j≤ k-1[M_i,M_j]-∑_1≤ i<j≤ kz_ji[M_i,M_j]. In the above equation the first equality follows from the fact that z_ij+z_ji=ℓ^2 and the second equality can be obtained via analogous calculation as in the proof of <ref>. The crucial observation is that regardless of the shuffle, the top right corner element has a common term, namely ∑_i=1^k (c_i-1/2_i^T_i), plus some linear combination of commutators. We call the common term the shuffle invariant. Note that the previous lemmas apply to any Heisenberg matrices, even those in . For the remainder of the section, we restrict considerations to matrices in G. Let M_1,…,M_k∈ G be such that X = M_1⋯ M_k∈Ω. The Parikh vector of occurrences of each matrix from G in product X may be written as x = (m_1, …, m_t) ∈ℕ^t where t = |G| as before. Define Λ_x = ∑_i=1^tm_i(c_i-1/2_i^T_i) as the shuffle invariant of Parikh vector x. Note that the shuffle invariant is dependant only on the generators used in the product and the Parikh vector x. Let 𝒮 = {s_1, …, s_p}⊆ℕ^k be the set of minimal solutions to the system of linear Diophantine equations for G giving an Ω-matrix, as described in the beginning of the section. Each s_j thus induces a shuffle invariant that we denote Λ_s_j∈ as shown in Definition <ref>. The Parikh vector of any X = M_1M_2⋯ M_k with X ∈Ω, denoted x, is a linear combination of elements of 𝒮, i.e., x = ∑_j=1^p y_j s_j. We then note that the shuffle invariant Λ_x of x is Λ_x = ∑_j=1^p y_j Λ_s_j, i.e., a linear combination of shuffle invariants of 𝒮. 
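In the ψ-representation used in the earlier sketch, the shuffle invariant of this definition is a one-line computation; the helper below is again only an illustration with names chosen here.

```python
def shuffle_invariant(generators, parikh):
    """Lambda_x = sum_i m_i * (c_i - (1/2) a_i^T b_i) for a Parikh vector x = (m_i),
    where each generator is given as psi(G_i) = (a_i, b_i, c_i)."""
    total = 0j
    for m, (a, b, c) in zip(parikh, generators):
        total += m * (c - sum(u * v for u, v in zip(a, b)) / 2)
    return total
```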
Finally, it follows from Lemma <ref> that for any X ∈(M_1, M_2, …, M_k), where as before M_1M_2 ⋯ M_k ∈Ω and whose Parikh vector is x = ∑_j=1^p y_j s_j, the top right entry of X is equal to X_1,n = Λ_x + ∑_1≤ i<j≤ kα_ij[M_i,M_j] = ∑_j=1^p y_j Λ_s_j + ∑_1≤ i<j≤ kα_ij[M_i,M_j], where each α_ij∈ℚ depends on the shuffle. Furthermore, if a product of Heisenberg matrices is an Ω-matrix and all matrix pairs share a common angle γ for their commutators, then shuffling the matrix product only modifies the top right element of the matrix by a real multiple of exp(γ). This drastically simplifies our later analysis. § THE IDENTITY PROBLEM FOR SUBSEMIGROUPS OF H(N,Q(I)) In this section, we prove our main result. Let G⊆ be a finite set of matrices. Then it is decidable in polynomial time if I∈⟨ G⟩. The proof relies on analysing generators used in a product that results in an Ω-matrix. There are two distinct cases to consider: either there is a pair of commutators with distinct angles, or else all commutators have the same angle. The former case is considered in <ref> and the latter in <ref>. More precisely, we will prove that in the former case, the identity matrix is always in the generated semigroup and that the latter case reduces to deciding whether shuffle invariants reach the line defined by the angle of the commutator. The two cases are illustrated in <ref>. On the left, is a depiction of the case where there are at least two commutators with different angles, γ_1 and γ_2. We will construct a sequence of products where the top right element tends to r_1 exp(γ_1) with positive r_1 and another product that tends to r_2 exp(γ_1) with negative r_2. This is achieved by changing the order of matrices whose commutator has angle γ_1. Similarly, we construct two sequences of products where the top right elements tend to r_3exp(γ_2) and r_4exp(γ_2), where r_3 and r_4 have the opposite signs. Together these sequences ensure, that eventually, the top right elements do not lie in the same open half-planes. On the right, is a depiction of the other case, where all commutators lie on γ-line. In this case, the shuffle invariants of products need to be used to reach the line. Let G={G_1,…,G_t}⊆, where each G_i is non-redundant. Suppose there exist M_1, M_2, M_3, M_4∈ G such that [M_1,M_2][M_3,M_4]. Then I∈⟨ G⟩. Since all M_i's are non-redundant, there exists a product M_1 M_2 M_3 M_4 ⋯ M_k∈Ω, where k≥ 4 and every M_i is in G. We prove the lemma by considering four products M_1^ℓ M_2^ℓ (M_3M_4X)^ℓ, M_2^ℓ M_1^ℓ (M_3M_4X)^ℓ, M_3^ℓ M_4^ℓ (M_1M_2X)^ℓ and M_4^ℓ M_3^ℓ (M_1M_2X)^ℓ, where X=∏_i=5^kM_i. Denote ψ(M_i) = (_i,_i,c_i) for i=1,…,4 and ψ(X) = (x_1,x_2,x_3). Note that our assumption on the matrices M_1, M_2, M_3, M_4 implies that there are at least four distinct matrices in the product M_1M_2⋯ M_k∈Ω justifying the above notation. Indeed, if there were at most three, the assumption would not hold. Assume that X defined as above is the empty product and M_i=M_j for some i,j=1,2,3,4 and i≠ j. For the sake of clarity, we will assume that i=1 and j=3. The other cases are analogous. Now, [M_1,M_2]=_1^T_2-_2^T_1 and [M_1,M_4]=_1^T_4-_4^T_1, and since M_1M_2M_1M_4∈Ω, we also have 2_1+_2+_4 = and 2_1+_2+_4 =. That is, now [M_1,M_4]=_1^T_4-_4^T_1=-_1^T(2_1+_2)+(2_1+_2)^T_1 =_2^T_1-_1^T_2 = -[M_1,M_2], which implies [M_1,M_4] [M_1,M_2] contrary to our assumption. Let γ_1 and γ_2 be the angles of [M_1,M_2] and [M_3,M_4], respectively, where γ_1 ≠γ_2 by assumption. 
We show that, in the limit as ℓ tends to infinity, the angles of the top-right entries in the first two products tend to γ_1, but they approach γ_1-line (that is, the line defined by the vector rexp(γ_1), where r is a positive real number) from opposite direction, i.e., one approaches rexp(γ_1) and the other -rexp(γ_1). The same holds for the last two products and the angle γ_2. See the left half of <ref> for illustration. Let us consider the product M_1^ℓ M_2^ℓ (M_3M_4X)^ℓ for some ℓ∈. By <ref>, (M_1^ℓ M_2^ℓ (M_3M_4X)^ℓ)_1,n = ℓ(∑_i=1^4c_i+x_3 -1/2(_1^T_1+_2^T_2+(_3+_4+x_1)^T(_3+_4+x_2)) + ℓ^2/2[M_1,M_2]. That is, the coefficient of ℓ^2 is 1/2[M_1,M_2]. Similarly by <ref>, the coefficients of ℓ^2 in the top right elements of M_2^ℓ M_1^ℓ (M_3M_4X)^ℓ, M_3^ℓ M_4^ℓ (M_1M_2X)^ℓ and M_4^ℓ M_3^ℓ (M_1M_2X)^ℓ are 1/2[M_2,M_1], 1/2[M_3,M_4], 1/2[M_4,M_3], respectively. Let [M_1,M_2] = r_1exp(γ_1), [M_2,M_1] = r_2exp(γ_2), [M_3,M_4] = r_3exp(γ_3), and [M_4,M_3] = r_4exp(γ_4). It is convenient to consider these complex numbers as two-dimensional vectors. Recall that commutator is antisymmetric and hence r_1 = -r_2, γ_1=γ_2 and r_3 = -r_4, γ_3 = γ_4. By our assumption, [M_1,M_2][M_3,M_4] and thus γ_1≠γ_3. It follows that the four vectors are not contained in any closed half-plane. Indeed, r_1exp(γ_1) and -r_1exp(γ_1) define two closed half-planes, say H_1 and H_2. Any closed half-plane that contains both r_1exp(γ_1) and -r_1exp(γ_1) must be equal to either H_1 or H_2. As r_3 = -r_4, either r_3exp(γ_3) or -r_3exp(γ_3) is not in that half-plane. Let us express the top right elements as functions of power ℓ: (M_1^ℓ M_2^ℓ (M_3M_4X)^ℓ)_1,n=r_1,ℓexp(γ_1,ℓ), (M_2^ℓ M_1^ℓ (M_3M_4X)^ℓ)_1,n=r_2,ℓexp(γ_2,ℓ), (M_3^ℓ M_4^ℓ (M_1M_2X)^ℓ)_1,n=r_3,ℓexp(γ_3,ℓ), (M_4^ℓ M_3^ℓ (M_1M_2X)^ℓ)_1,n=r_4,ℓexp(γ_4,ℓ), where r_1,ℓ,r_2,ℓ,r_3,ℓ,r_4,ℓ∈ and γ_1,ℓ,γ_2,ℓ,γ_3,ℓ,γ_4,ℓ∈[0,π). Since r_1exp(γ_1), r_2exp(γ_1), r_3exp(γ_3), and r_4exp(γ_3) are the coefficients that multiply 1/2ℓ^2 in the formula for the top-right entry in the above products, and ℓ^2 is the highest power that appears there, we conclude that lim_ℓ→∞γ_1,ℓ = lim_ℓ→∞γ_2,ℓ = γ_1 and lim_ℓ→∞γ_3,ℓ = lim_ℓ→∞γ_4,ℓ = γ_3. Moreover, for sufficiently large ℓ, r_i,ℓ and r_i+1,ℓ have opposite signs, where i=1,3. Recall that r_1exp(γ_1), r_2exp(γ_2), r_3exp(γ_3), and r_4exp(γ_4) do not lie in the same closed half-plane. It follows that, for sufficiently large ℓ, the vectors r_1,ℓexp(γ_1,ℓ), r_2,ℓexp(γ_2,ℓ), r_3,ℓexp(γ_3,ℓ), and r_4,ℓexp(γ_4,ℓ) also do not lie in the same closed half-plane. See the left half of <ref> for illustration. Since they are not in the same closed half-plane, their positive linear combinations span the whole plane . In particular, for some ℓ≥ 1 and some x_1,x_2,x_3,x_4∈, not all of which are zero, we have x_1r_1,ℓexp(γ_1,ℓ)+x_2r_2,ℓexp(γ_2,ℓ)+x_3r_3,ℓexp(γ_3,ℓ)+x_4r_4,ℓexp(γ_4,ℓ)=0. This implies that (M_1^ℓ M_2^ℓ (M_3M_4X)^ℓ)^x_1(M_2^ℓ M_1^ℓ (M_3M_4X)^ℓ)^x_2(M_3^ℓ M_4^ℓ (M_1M_2X)^ℓ)^x_3(M_4^ℓ M_3^ℓ (M_1M_2X)^ℓ)^x_4 equals the identity matrix I, finishing the proof. It remains to consider the case when the angles of commutators coincide for each pair of non-redundant matrices. Our aim is to prove that, under this condition, it is decidable whether the identity matrix is in the generated semigroup. Let G={G_1,…,G_t}⊆ be a set of non-redundant matrices such that the angle of commutator [G_i,G_i'] is γ for all 1 ≤ i, i' ≤ t, then we can determine in polynomial time if I∈⟨ G⟩. 
Let {s_1, …, s_p}⊆ℕ^t be the set of minimal solutions to the SLDEs for G giving zeros in a and b elements. Each s_j induces a shuffle invariant Λ_s_j∈ as explained in Definition <ref>. Consider a product X = M_1⋯ M_k∈Ω, where each M_i∈ G. Let x = (m_1, m_2, …, m_t) ∈ℕ^t be the Parikh vector of the number of occurrences of each matrix from G in product X. Since X ∈Ω, we have x = ∑_j = 1^p y_j s_j, where each y_j ∈ℕ. Notice that X ∈(G_1^m_1, …, G_t^m_t). Hence, by Equation (<ref>), we have X_1,n = Λ_x + ∑_1≤ i<j≤ kα_ij[M_i,M_j] = ∑_j=1^p y_j Λ_s_j + r exp(γ), where α_ij∈ℚ and r∈ℝ depend on the shuffle. In other words, any shuffle of the product X will change the top right entry X_1,n by a real multiple of exp(γ). Let H_1, H_2 be the two open half-planes of the complex plane induced by exp(γ), that is, the union H_1∪ H_2 is the complement of the γ-line; thus 0∉H_1∪ H_2. We now prove that if {Λ_s_1, …, Λ_s_p}⊆ H_1 or {Λ_s_1, …, Λ_s_p}⊆ H_2 then we cannot reach the identity matrix. Assume that {Λ_s_1, …, Λ_s_p}⊆ H_1, renaming H_1, H_2 if necessary. Assume that there exists some product X = X_1 X_2 ⋯ X_k equal to the identity matrix, where k > 0 and X_j ∈ G. Then since X ∈Ω, we see from Equation (<ref>) that X_1,n = ∑_j=1^p y_j Λ_s_j + r exp(γ), where r∈ℝ. Clearly, ∑_j=1^p y_j Λ_s_j∈ H_1, and since y_j≠ 0 for at least one i, we have ∑_j=1^p y_j Λ_s_j≠ 0. Now, since r exp(γ) is on the γ-line, which is the boundary of H_1, the value X_1,n belongs to H_1 and cannot equal zero. This contradicts the assumption that X is the identity matrix. If {Λ_s_1, …, Λ_s_p} is not fully contained in either H_1 or H_2, then there are two possibilities. Either there exists some Λ_s_j∈ such that the angle of Λ_s_j is equal to γ (in which case such a Λ_s_j lies on the line defined by exp(γ)), or else there exist Λ_s_i, Λ_s_j such that 1 ≤ i < j ≤ p and Λ_s_i and Λ_s_j lie in different open half-planes, say Λ_s_i∈ H_1 and Λ_s_j∈ H_2. In the latter case, note that there exists x, y ∈ℕ such that xΛ_s_i + yΛ_s_j = rexp(γ) for some r ∈ℝ since Λ_s_i, Λ_s_j and the commutators that define the γ-line have rational components. It means that in both cases there exist z_1, …, z_p ∈ℕ such that ∑_j=1^pz_jΛ_s_j = rexp(γ) for some r ∈ℝ. Consider a product T = T_1 ⋯ T_k∈Ω, where each T_j∈ G and whose Parikh vector is equal to ∑_j=1^pz_js_j, where z_1, …, z_p ∈ℕ are as above. It follows from Equation (<ref>) that T_1,n = ∑_j=1^pz_jΛ_s_j + r'exp(γ) = r exp(γ) + r'exp(γ), where r, r' ∈ℝ and shuffles of such a product change only r'. We have two possibilities. Either T = T_1 ⋯ T_k is a product only consisting of commuting matrices from G, or else two of the matrices in the product of T do not commute. In the latter case, let us write T' = N_1N_2X' ∈(T_1, …, T_k), where N_1∈ G and N_2∈ G do not commute and X' is the product of the remaining matrices in any order. We observe that Lemma <ref> implies (N_1^ℓ_1N_2^ℓ_1X'^ℓ_1)_1,n = ℓ_1 rexp(γ) + ℓ_1^2/2 [N_1, N_2]= ℓ_1 rexp(γ) + ℓ_1^2/2 r'exp(γ) and (N_2^ℓ_2N_1^ℓ_2X'^ℓ_2)_1,n = ℓ_2 rexp(γ) + ℓ_2^2/2 [N_2, N_1]= ℓ_2 rexp(γ) - ℓ_2^2/2 r'exp(γ), for some 0≠ r' ∈ℝ. We then notice that ((N_1^ℓ_1N_2^ℓ_1X'^ℓ_1)^d_1(N_2^ℓ_2N_1^ℓ_2X'^ℓ_2)^d_2)_1,n = d_1(ℓ_1rexp(γ) + ℓ_1^2/2r'exp(γ)) + d_2(ℓ_2rexp(γ) - ℓ_2^2/2r'exp(γ)). Now, d_1(ℓ_1rexp(γ) + ℓ_1^2/2r'exp(γ)) + d_2(ℓ_2rexp(γ) - ℓ_2^2/2r'exp(γ)) = 0 ⟺ d_1(2ℓ_1r + ℓ_1^2r') + d_2(2ℓ_2r - ℓ_2^2r') = 0 ⟺ d_1(2r/r'ℓ_1 + ℓ_1^2) + d_2(2r/r'ℓ_2 - ℓ_2^2) = 0. By our assumption, the vectors rexp(γ) and r'exp(γ) have rational coordinates and the same angle γ. 
It follows that r/r'∈ℚ. Hence we may choose sufficiently large ℓ_1, ℓ_2 > 1 such that 2r/r'ℓ_1 + ℓ_1^2 and 2r/r'ℓ_2 - ℓ_2^2 have different signs, and then integers d_1, d_2 > 1 can be chosen that satisfy the above equation. This choice of ℓ_1, ℓ_2, d_1, d_2 is then such that (N_1^ℓ_1N_2^ℓ_1X'^ℓ_1)^d_1(N_2^ℓ_2N_1^ℓ_2X'^ℓ_2)^d_2 = I as required. Thus if such non-commuting matrices are present, we can reach the identity. We now show how to decide in polynomial time whether the γ-line can be reached by a pair of non-commuting matrices from G. We can compute a vector v with rational components that lies on the the γ-line in polynomial time, e.g., it can be any non-zero commutator. We must now determine if there exist a shuffle invariant that lies on the γ-line and which uses a pair on non-commuting matrices. This may be determined by deciding the solvability of a polynomially sized set of Non-homogeneous Systems of Linear Diophantine Equations (NSLDEs), as explained below. We will show that determining if such a NSLDE has a solution can be done in polynomial time, and therefore the above property can be decided in . We now outline the process. Let v ∈ be a complex number on the γ-line with rational components. We define its vectorization as (v) = ((v), (v)), i.e., splitting into real and imaginary components. The value v^⊥∈ whose vectorization (v^⊥) is perpendicular to (v) in the complex plane is defined as v^⊥ = exp(iπ/2)v noting that (v^⊥) has rational components such that (v^⊥) = (-(v)), (v)). Now, if all shuffle invariants are not contained within H_1 (or else H_2) then there exists two shuffle invariants, say Λ_s_1, Λ_s_2∈ such that (Λ_s_1)·(v^⊥) > 0 and (Λ_s_2)·(v^⊥) < 0, where · denotes the dot product, or else there exists a shuffle invariant Λ_s_3∈ such that (Λ_s_1)·(v^⊥) = 0, thus Λ_s_3 is on the γ-line. Recall that G = {G_1,…, G_t} and define y = (c_1-1/2a_1^Tb_1, …, c_t-1/2a_t^Tb_t) ∈^t. Consider the vector z = (v^⊥)(y) + (v^⊥)(y) ∈ℚ^t. Now, let x be the Parikh vector of some product of matrices from G that gives an Ω-matrix. Then for x and its shuffle invariant Λ_x∈, we have that z^Tx = (Λ_x)·(v^⊥). We will now derive a set of NSLDEs, to allow us to determine if it is possible to find a product of matrices, not all of which commute, whose shuffle invariant lies on the γ-line. We require a non-homogeneous system in order to enforce that the solution is non-commuting. We thus consider the following system of linear Diophantine equations A≥b, which we will now define. We may assume that not all matrices in generator set G = {G_1, …, G_t} commute, since we will deal with this subcase later. Let us therefore consider a pair of non-commuting matrices G_i, G_j ∈ G. In the construction of our NSLDE, submatrices A_1 and -A_1 are used to enforce that is the Parikh vector of an Ω-matrix M_ = M_1 M_2 ⋯ M_k, submatrices A_2, -A_2 are used to determine that the shuffle invariant Λ_ is on the γ-line, and A_i,j is used to ensure that there exists two non-commuting matrices in the product M_1 M_2 ⋯ M_k, namely matrices G_i and G_j. We will formulate such an NSLDE for each pair of non-commuting matrices. Let us define T_i,j≥b for every pair of non-commuting matrices G_i, G_j as follows: T_i,j = [ A_1; -A_1; A_2; -A_2; A_i,j ], A_1 =[ (_1) (_2) ⋯ (_t); (_1) (_2) ⋯ (_t); (_1) (_2) ⋯ (_t); (_1) (_2) ⋯ (_t); ], A_2 =[ z(1); z(2); ⋮; z(t) ]^T, A_i,j =[ e_i^T; e_j^T ], where ∈^t, b = (0^4(n-2), 0^4(n-2), 0, 0, 1, 1)^T; noting that T_i,j∈ℚ^(8(n-2)+4) × t. 
Here, e_i ∈{0, 1}^t denotes the i'th standard (column) basis vector, i.e., the all zero vector except e_i(i) = 1, and similarly for e_j. A solution x∈^t to this NSLDE implies that A_1 ≥0^4(n-2) and -A_1 ≥0^4(n-2), thus A_1 = 0^4(n-2), and therefore G_1^(1)⋯ G_t^(t)∈Ω, which was the first property that we wished to enforce. Secondly, we see that A_2≥ 0 and -A_2≥ 0 implies that Λ_x is orthogonal to v^⊥, i.e., the corresponding shuffle invariant is on the γ-line. Finally, A_i,j≥ (1,1)^T implies that matrix G_i was used at least once, and matrix G_j was used at least once. Therefore, if the above system has a solution, then there is a product containing non-commuting matrices G_i, G_j that gives an Ω-matrix, and whose shuffle invariant lies on the γ-line. By the previous reasoning, this implies that the identity matrix belongs to ⟨ G ⟩. Note that the vector b has non-negative coordinates, namely, 0s and 1s. Therefore, by <ref>, we can determine in polynomial time whether the system T_i,j≥b has an integer solution. There are t(t-1)/2 many pairs of i,j that we need to check. Hence we can determine in polynomial time if the γ-line can be reached by using a pair of non-commuting matrices (which implies that I∈⟨ G ⟩ as explained above). Finally, we consider the case when the above procedure gives us a negative answer, that is, all matrices which can be used to reach the γ-line commute. Since we assumed that {Λ_s_1, …, Λ_s_p} is not contained in H_1 or H_2, there are two cases to consider. Either there exist some Λ_s_i∈ H_1 and some Λ_s_j∈ H_2 or {Λ_s_1, …, Λ_s_p}⊆ H_1 ∪γ-line. (Actually, there is a third possibility that {Λ_s_1, …, Λ_s_p}⊆ H_2 ∪γ-line but it is similar to the second case by symmetry.) Note that we can determine in polynomial time which of these cases hold. Indeed, the first case holds if and only if there exist vectors x, y∈^t such that A_1 x = 0^4(n-2), A_1 y = 0^4(n-2), A_2x > 0, and A_2y < 0. To see this, take x∈^t such that A_1 x = 0^4(n-2) and A_2x > 0. Then x can be written as x = ∑_j=1^pz_js_j for some tuple (z_1,…,z_p)∈^p. So we have A_2x = ∑_j=1^pz_jA_2s_j > 0, which implies that A_2s_i > 0 for some i since all z_j are non-negative. Hence Λ_s_i∈ H_1. All the other cases can be considered in a similar way, and the existence of a solution to such system can be decided in polynomial time by the second part of <ref>. In the first case, all matrices from G can be used in some product that is equal to an Ω-matrix whose top-right entry lies on the γ-line. Indeed, since we assume that each matrix M_k from G is non-redundant, there is some s_i with non-zero kth coordinate. Now, the shuffle invariant Λ_s_i can be paired with some Λ_s_j from the other open half-plane to reach the γ-line. Namely, there exist x, y ∈ℕ such that xΛ_s_i + yΛ_s_j = rexp(γ) for some r ∈ℝ. Hence we can find a tuple (z_1,…,z_p)∈^p such that ∑_j=1^pz_jΛ_s_j = rexp(γ), for some r ∈ℝ, and the vector ∑_j=1^pz_js_j has only non-zero coordinates. This gives us a product with Parikh vector ∑_j=1^pz_js_j that uses all matrices from G and reaches the γ-line. On the other hand, in the second case, a matrix G_k∈ G can be used in some product that reaches the γ-line if and only if the kth coordinate of some s_i, for which Λ_s_i lines on the γ-line, is non-zero. In other words, precisely the following matrices can be used to reach the γ-line: {G_k ∈ G : there is some s_i such that s_i(k)>0 and Λ_s_i = rexp(γ) for some r ∈ℝ}. 
To show this, assume G_k belongs to a product that reaches the γ-line and let x∈^t be its Parikh vector. We can write x = ∑_j=1^pz_js_j, for some (z_1,…,z_p)∈^p. Note that Λ_x = ∑_j=1^pz_jΛ_s_j = rexp(γ) for some r ∈ℝ. Since we assumed {Λ_s_1, …, Λ_s_p}⊆ H_1 ∪γ-line, it follows that z_j=0 if Λ_s_j∈ H_1. This means that only those Λ_s_j that lie on the γ-line can appear in the linear combination Λ_x = ∑_j=1^pz_jΛ_s_j. Since G_k appears in the product, we have x(k) = ∑_j=1^pz_js_j(k) > 0. Thus z_js_j(k) > 0 for some j. In particular, s_j(k) > 0 and z_j>0, which implies that Λ_s_j is on the γ-line. Therefore, G_k belongs to the set (<ref>). Conversely, consider all matrices G_k ∈ G for which there is some s_i_k with the property that s_i_k(k)>0 and Λ_s_i_k lies on the γ-line. Let x be the sum of these s_i_k. In this case, Λ_x is on the γ-line, and we can construct a product with Parikh vector x that reaches the γ-line and contains all matrices from the set (<ref>), namely, it suffices to take any product with Parikh vector x. Note that the condition “there is some s_i such that s_i(k)>0 and Λ_s_i is on the γ-line” is equivalent to the following: there is some x∈^t with x(k)>0 such that A_1x = 0^4(n-2) and Λ_x is on the γ-line. The implication in one direction is obvious. Suppose there is x∈^t with x(k)>0 such that A_1x = 0^4(n-2) and Λ_x is on the γ-line. Then we can write x as a sum x = ∑_j=1^pz_js_j, where (z_1,…,z_p)∈^p. Since x(k)>0, there is j such that s_j(k) > 0. Also note that Λ_s_i must be on the γ-line, otherwise Λ_x would not be on the γ-line because z_j>0. Hence we can determine which matrices belong to (<ref>) by deciding if there is a solution to a system on linear equation similar to (<ref>), with the exception that matrix A_i,j should be replaced with A_k = (e_k^T) and vector b has the form b = (0^4(n-2), 0^4(n-2), 0, 0, 1)^T. This can be done in polynomial time by <ref>. In both cases, the set C = {G_1,…,G_t'}⊆ G of commuting matrices that are used to reach the γ-line can be computed in polynomial time. To finish the proof, note that by (<ref>), the top-right corner M_1,n of any M∈⟨ C⟩∩Ω can be expressed using the Parikh vector of the generators from C. This allows us to construct a new homogeneous system of linear Diophantine equations. Let A∈^4(n-2)× t' be defined as in Equation (<ref>) using only matrices present in C, and let B=(c_1-1/2_1^T_1,…,c_t'-1/2_t'^T_t'). We then construct a system [ A; B ]=0, where ∈^t' and 0 is the t'-dimensional zero vector. It is straightforward to see that if this system has a solution x, then G_1^x(1)G_2^x(2)⋯ G_t'^x(t')=I. By <ref> (see also <cit.>), we can decide if such a system has a non-zero solution in polynomial time. Lemmata <ref> and <ref> allow us to prove the main result, <ref>. Proof of <ref> Let G={G_1,…,G_t}. The first step is to remove all redundant matrices from G. Recall that we can check if a matrix is non-redundant by deciding if a system of non-homogeneous linear Diophantine equations, giving an Ω-matrix, has a solution. This was explain at the beginning of Section <ref>. It is decidable in polynomial time if there exists a solution to such a system. It is obvious that if there is no solution to the system, then the identity matrix is not in the generated semigroup. Indeed, there is no way to generate a matrix with zeroes in the a and b elements and thus I∉⟨ G⟩. If there is at least one solution, then we calculate commutators [G_i,G_j] for every pair G_i,G_j of non-redundant matrices from G. 
If there are at least two commutators with different angles, the identity matrix is in the semigroup by <ref>. If all commutators have the same angle, we apply <ref> to decide in polynomial time whether the identity matrix is in the semigroup. Note that there are O(t^2) commutators to calculate. Hence the whole procedure runs in polynomial time. The decidability of the Identity Problem implies that the Subgroup Problem is also decidable. That is, whether the semigroup generated by the generators G contains a non-trivial subgroup. However, the decidability of the Group Problem, i.e., whether ⟨ G⟩ is a group, does not immediately follow. Our result can be extended to show decidability of the Group Problem. It is decidable in polynomial time whether a finite set of matrices G⊆ forms a group. In order for ⟨ G⟩ to be a group, each element of G must have a multiplicative inverse. If there exist any redundant matrices in G, then ⟨ G ⟩ is not a group, since a redundant matrix cannot be part of a product giving even an Ω-matrix. Checking if a matrix is redundant can be done in polynomial time (see Section <ref>). Assuming then that all matrices are non-redundant, if there exist matrices M_1, M_2, M_3, M_4 ∈ G such that [M_1,M_2][M_3,M_4], then I∈⟨ G ⟩ by Lemma <ref>, and in fact we can find a product of matrices equal to the identity matrix which contains each matrix from G (since all matrices are non-redundant, thus M_1⋯ M_k ∈Ω may be chosen to contain all matrices). Thus each matrix of G has a multiplicative inverse as required. Next, assume that all matrices in G are non-redundant and share a common commutator angle. As explained in the proof of <ref>, we can compute in polynomial time a subset C⊆ G of matrices that can be used in a product which is equal to an Ω-matrix whose top-right element lies on the γ-line. Clearly, if C≠ G, then ⟨ G ⟩ is not a group. If C=G and G contains a pair of non-commuting matrices, then in the proof of <ref> we can choose a product T = T_1 ⋯ T_k∈Ω in such a way that it includes all matrices from G. Using the same idea we can construct a product that gives I and uses all matrices from G. Hence ⟨ G ⟩ is a group. Finally, we need to consider the case when C=G contains only commuting matrices. We can decide if there is a product that is equal to I and uses all matrices from G by solving a homogeneous system of linear Diophantine equations. Let A∈^4(n-2)× t be defined as in Section <ref>, and let A_2=(c_1-1/2_1^T_1,…,c_t-1/2_t^T_t). Define system [ A; A_2 ]=0^4(n-2)+1, where ∈^t and 0^4(n-2)+1 = (0, …, 0) ∈ℕ^4(n-2)+1. Solvability of this system implies that the identity matrix can be reached. Unfortunately, this does not guarantee that all matrices from G were used in such a product reaching I. We can however form the following non-homogeneous system of linear Diophantine equations [ A; -A; A_2; -A_2; I_t ]≥[ 0^4(n-2); 0^4(n-2); 0; 0; 1^t ], where 1^t = (1, …, 1) ∈ℕ^t and I_t is the t × t identity matrix. Solvability of this system is equivalent to the homogeneous system defined above, with the added constraint that I_t ≥1^t implying that all matrices must be used at least once in such a solution, as required for ⟨ G ⟩ to be a group. By <ref>, we can decide in polynomial time if the above system has a solution. § FUTURE RESEARCH We believe that the techniques, and the general approach, presented in the previous chapters can act as stepping stones for related problems. 
In particular, consider the Membership Problem, i.e., where the target matrix can be any matrix rather than the identity matrix. Let M=[ 1 m^T_1 m_3; 0 I_n-2 m_2; 0 0^T 1 ] be the target matrix and let G={G_1,…,G_t}, where ψ(G_i)=(a_i,b_i,c_i). Following the idea of Section <ref>, we can consider the system A=(m_1,m_2), where ∈^t. This is a non-homogeneous system of linear Diophantine equations that can be solved in . Its solution set is a union of two finite solution sets, S_0 and S_1. The set S_0 consists of the solutions to the corresponding homogeneous system; these can be repeated any number of times as they add up to 0 on the right-hand side. The other set, S_1, corresponds to reaching the vector (m_1,m_2), and the matrices corresponding to a solution in S_1 have to be used exactly the prescribed number of times. The techniques developed in Section <ref> allow us to manipulate the matrices corresponding to solutions in S_0 in order to obtain the desired value in the top right corner. However, this is not enough, as the main technique relies on repeated use of Ω-matrices. These can be interspersed with matrices corresponding to a solution in S_1, affecting the top right corner in uncontrollable ways.
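As a small, purely illustrative companion to this programme, the sketch below assembles the real/imaginary constraint data of the generators and searches bounded exponent vectors for the (m_1, m_2)-part of a target. It is an exponential-time brute force added here for concreteness, not the polynomial-time machinery developed above, and all names and the search bound are choices made for this illustration.

```python
from itertools import product as cartesian

def constraint_columns(generators):
    """For each generator psi(G_i) = (a_i, b_i, c_i), the column
    [Re a_i, Im a_i, Re b_i, Im b_i], as in the system A x = (m_1, m_2)."""
    cols = []
    for a, b, _c in generators:
        col = []
        for v in (a, b):
            col += [z.real for z in v] + [z.imag for z in v]
        cols.append(col)
    return cols

def reaches_a_b_target(generators, target_a, target_b, bound=4):
    """Is there a nonzero x in {0..bound}^t with sum_i x_i (a_i, b_i) = (target_a, target_b)?"""
    cols = constraint_columns(generators)
    goal = ([z.real for z in target_a] + [z.imag for z in target_a]
            + [z.real for z in target_b] + [z.imag for z in target_b])
    for x in cartesian(range(bound + 1), repeat=len(generators)):
        if not any(x):
            continue
        combo = [sum(xi * col[r] for xi, col in zip(x, cols)) for r in range(len(goal))]
        if all(abs(c - g) < 1e-9 for c, g in zip(combo, goal)):
            return x
    return None
```

Taking target_a = target_b = 0 recovers a (brute-force) search for Ω-matrices; the top-right entry of the resulting products must then be analysed separately, which is precisely where the difficulties described above arise.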
http://arxiv.org/abs/2307.04483v1
20230710111105
Towards Hypersemitoric Systems
[ "Tobias Våge Henriksen", "Sonja Hohloch", "Nikolay N. Martynchuk" ]
math.SG
[ "math.SG", "37J35 53D20 70H06" ]
Towards Hypersemitoric Systems Tobias Våge Henriksen, Sonja Hohloch, and Nikolay N. Martynchuk ============================================================= This survey gives a short and comprehensive introduction to a class of finite-dimensional integrable systems known as hypersemitoric systems, recently introduced by Hohloch and Palmer in connection with the solution of the problem of how to extend Hamiltonian circle actions on symplectic 4-manifolds to integrable systems with `nice' singularities. The quadratic spherical pendulum, the Euler and Lagrange tops (for generic values of the Casimirs), coupled angular momenta, and the coupled spin oscillator system are all examples of hypersemitoric systems. Hypersemitoric systems are a natural generalization of so-called semitoric systems (introduced by Vũ Ngọc) which in turn generalize toric systems. Speaking in terms of bifurcations, semitoric systems are `toric systems with/after supercritical Hamiltonian-Hopf bifurcations'. Hypersemitoric systems are `semitoric systems with, among others, subcritical Hamiltonian-Hopf bifurcations'. Whereas the symplectic geometry and spectral theory of toric and semitoric systems is by now very well developed, the theory of hypersemitoric systems is still forming its shape. This short survey introduces the reader to this developing theory by presenting the necessary notions and results as well as its connections to other areas of mathematics and mathematical physics. § INTRODUCTION Integrable Hamiltonian systems play an important role in mathematical and physical sciences. For instance, within celestial mechanics, there is the Kepler problem, and, within quantum mechanics, there is the Jaynes-Cummings model, which are both integrable. Integrable systems are very special dynamical systems exhibiting regular (as opposed to chaotic) behaviour in the sense that there exists a maximal number of (independent, see Definition <ref>) integrals of motion, allowing one to at least in principle integrate the equations of motion. Dynamics of a finite-dimensional integrable Hamiltonian system, defined by means of a proper momentum map (see Definition <ref>), is generically constrained to n-dimensional tori, where n is the number of degrees of freedom. These tori turn out to be Lagrangian submanifolds of the underlying symplectic manifold on which the Hamiltonian system is defined, and thus an integrable system can be seen as a singular Lagrangian torus fibration over a certain subset of ^n, see in particular the papers by Mineur <cit.>, Arnol'd <cit.>, Weinstein <cit.> and Duistermaat <cit.>. This motivates one to study integrable systems using techniques from symplectic geometry. The singular fibres of these singular Lagrangian torus fibrations reflect a non-trivial geometric or dynamical property of the underlying integrable system. The most prominent examples are the monodromy around a focus-focus point and bifurcations of Liouville tori, which we will address below.
In the context of symplectic classification of integrable systems it is known how to classify a number of different types of such (`typical') singularities: a saddle singularity (in one degree of freedom) by Dufour, Molino, and Toulet <cit.>, an elliptic singularity (in any dimension) by Eliasson <cit.>, a focus-focus singularity (in dimension 2) by <cit.>, and a parabolic singularity by Bolsinov, Guglielmi, and Kudryavtseva <cit.> and Kudryavtseva and Martynchuk <cit.>. See also the recent breakthrough results concerning symplectic classification in the real-analytic category by Kudryavtseva <cit.> and by Kudryavtseva and Oshemkov <cit.>. In the context of global classification of integrable systems, Pelayo and Vũ Ngọc <cit.> showed that a large class of physically important systems known as semitoric systems are classified by a set of 5 invariants. This is one of the few known explicit results in the global symplectic classification of integrable systems, apart from the classical Delzant's <cit.> construction and the work of Zung <cit.> relating the semi-local (i.e. in a neighbourhood of a singular fibre) and global classification problems. We refer to Sections <ref> and <ref> for more details on semitoric systems. What is currently missing in the literature is a detailed discussion of systems beyond semitoric type: Whereas the topological classification of such systems is a well developed theory going back to Duistermaat and Fomenko and Zieschang (see e.g. Bolsinov and Fomenko <cit.> and the references therein), a more refined (e.g. symplectic) analysis is currently an open problem for in fact the majority of such systems. In particular, what is missing is a detailed analysis of a generalisation of semitoric systems additionally allowing hyperbolic-regular, hyperbolic-elliptic, and parabolic points, known as hypersemitoric systems. The latter class was introduced by Hohloch and Palmer <cit.> in connection with the problem of extending Hamiltonian circle actions on symplectic 4-manifolds to integrable systems, which they solved within this class of systems, see Hohloch and Palmer <cit.> for details. Hypersemitoric systems thus present a challenging platform for the further study by both geometers and analysists and this survey is devised as a quick introduction. Nevertheless, note that the class of hypersemitoric systems does not include all possible singularities that may arise in 4-dimensional integrable systems: the underlying global S^1-action prevents the existence of hyperbolic-hyperbolic singularities; moreover, the definition of hypersemitoric systems excludes most of the `typical' degenerate S^1-invariant singularities, see Kalashnikov's <cit.> list. There exists another class of integrable systems, namely hyperbolic semitoric systems (cf. <cit.>), which, if one considers the union with semitoric systems, contains hypersemitoric system, see Remark <ref>. The hyperbolic semitoric systems do include all `typical' degenerate S^1-invariant singularities in Kalashnikov's <cit.> list. §.§ Organization of the paper The rest of this paper is organized as follows: In Section <ref>, we give the definition of (Liouville) integrability, before defining toric, semitoric, and hypersemitoric systems. Moreover, we explain some important properties of integrable systems and give a short survey over the theory of atoms and molecules. 
In Section <ref>, we discuss semitoric systems in detail, i.e., their symplectic classification in terms of five invariants and how one may obtain a semitoric system from a toric one. Eventually, we recall some important examples. In Section <ref>, we consider hypersemitoric systems: we first discuss flaps and pleats, which occur in the momentum image of hypersemitoric systems. Then we consider how one may obtain hypersemitoric systems from (semi)toric systems before we briefly explain an explicit example. §.§ Acknowledgements The authors are very grateful to Álvaro Pelayo and San Vũ Ngọc for useful comments and suggestions that helped to improve the original version of this work. The first author was fully supported by the Double Doctorate Funding of the Faculty of Science and Engineering of the University of Groningen. Moreover, all authors were partially supported by the FNRS-FWO Excellence of Science (EoS) project `Symplectic Techniques in Differential Geometry' G0H4518N. § DEFINITIONS, CONVENTIONS, AND BACKGROUND In this section, we give an outline of integrability with an emphasis on integrable systems defined on 4-manifolds and admitting a global effective Hamiltonian circle action. Hypersemitoric systems are a certain class of systems of this type. We start by recalling the classical Arnol'd-Liouville-Mineur theorem, and then move from toric to semitoric to hypersemitoric systems. We also show how the theory relates to the general frameworks of monodromy and bifurcations of Liouville tori, i.e., Fomenko-Zieschang theory. §.§ Integrable systems Let (M, ω) be a symplectic manifold of dimension 2n. Since the symplectic form is non-degenerate, for any function f ∈ C^∞(M,), there exists a unique vector field X_f, called the Hamiltonian vector field of f, such that ι_X_fω = - df. The function f is called the Hamiltonian, and ż = X_f(z) is called a Hamiltonian system, sometimes briefly denoted by X_f. For two Hamiltonians f,g ∈ C^∞(M,), the Poisson bracket is defined by {f, g} := ω(X_f, X_g). If {f, g} = 0, then f and g are said to Poisson commute. Note that {f, g} = X_f(g). If f and g Poisson commute, then g is called a (first) integral of X_f. A Hamiltonian system X_H on a 2n-dimensional symplectic manifold (M, ω) is said to be completely integrable (or briefly integrable) if there exist n functionally independent integrals f_1 := H, f_2,…,f_n of X_H, i.e. their gradients are almost everywhere linearly independent on M, the integrals all Poisson commute with each other, and the flows of X_f_1, …, X_f_n are complete. A shorter notation is (M, ω, F=(f_1,…,f_n)) and F is often referred to as the momentum or integral map of the system. A point p∈ M is regular if the rank of DF_p is maximal and singular otherwise. A value of F is regular if all points in the preimage are regular, and singular otherwise. Similarly, one defines what it means for a fibre F^-1(r) of F to be regular, resp., singular and for a leaf of F, i.e. a connected component of a fibre, to be regular, resp. singular. The Arnol'd-Liouville-Mineur theorem <cit.> describes the regular leaves of the foliation generated by the momentum map of a 2n-dimensional integrable system. Each regular leaf is a Lagrangian submanifold, and if the leaf is connected and compact, then it is diffeomorphic to an n-torus T^n. Such a foliation will be called a Lagrangian torus fibration. 
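The commutation condition in the definition of complete integrability above can be checked mechanically in simple examples. The following small sketch (not part of the original survey; it only assumes the canonical Poisson bracket on ℝ^4 with coordinates (x, y, ξ, η)) verifies with sympy that the harmonic-oscillator energy and the angular momentum Poisson commute and hence give an integrable system with two degrees of freedom.

import sympy as sp

x, y, xi, eta = sp.symbols('x y xi eta', real=True)

def poisson(f, g, q=(x, y), p=(xi, eta)):
    # canonical bracket {f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i)
    return sum(sp.diff(f, qi)*sp.diff(g, pi) - sp.diff(f, pi)*sp.diff(g, qi)
               for qi, pi in zip(q, p))

# two integrals on R^4: the harmonic-oscillator energy and the angular momentum
f1 = sp.Rational(1, 2)*(x**2 + xi**2) + sp.Rational(1, 2)*(y**2 + eta**2)
f2 = x*eta - y*xi

print(sp.simplify(poisson(f1, f2)))   # 0, i.e. f1 and f2 Poisson commute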
Let r ∈ ℝ^n be a regular value for the momentum mapping F, and let F^-1(r) be a connected and compact fibre, and hence diffeomorphic to T^n, and let U be a tubular neighbourhood of F^-1(r). The Arnol'd-Liouville-Mineur theorem also tells us that U is diffeomorphic to V × T^n, where V is an open set of ℝ^n. On V × T^n, there exist coordinates I_1, …, I_n, ϕ_1, …, ϕ_n, called action-angle coordinates. Here each I_i for i = 1, …, n is a function of the f_i's, whilst each ϕ_i is a standard angle coordinate on T^n. In action-angle coordinates, the symplectic form becomes ω = ∑ dϕ_i∧ dI_i. Note that, in general, action-angle coordinates only exist locally. Duistermaat <cit.> showed that there can exist obstructions to the global existence of action-angle coordinates in terms of the (Hamiltonian) monodromy and the Chern class on the topological level as well as the Lagrangian class on the symplectic level. For us, monodromy will play an essential role so that we will recall its definition here; for more detail see <cit.>. Let F : M → B be a Lagrangian torus fibration over an n-dimensional manifold B and denote by R ⊆ B the set of the regular values of F. Then there exists a natural covering ⋃_r ∈ R H_1(F^-1(r)) → R, where H_1(F^-1(r)) is the first homology group of F^-1(r) with integer coefficients. Because of this, there is a natural representation of π_1(R) into the group SL(n, ℤ) of automorphisms of the lattice H_1(F^-1(r)) ≃ℤ^n. This representation is called the Hamiltonian monodromy of F : M → B (or of F : M → R). Thus, to any loop γ in R, one can assign an n× n integer matrix called the monodromy or the monodromy matrix along γ. Note that Lagrangian torus fibrations are allowed to have singular points and these are precisely the points that encode essential properties of the underlying integrable system. One has in particular been interested in non-degenerate singular points, i.e. points for which the Hessians of the integrals span a Cartan subalgebra in the real symplectic Lie algebra sp(2n, ℝ) (cf. Bolsinov and Fomenko <cit.>). Locally one can describe such singularities by local normal forms (cf., among others, the works by Eliasson <cit.>, Miranda and Zung <cit.>, and Vũ Ngọc and Wacheux <cit.>): in a neighbourhood U of a non-degenerate singular point, one can find local symplectic coordinates (x_1, …, x_n, ξ_1, …, ξ_n) such that the symplectic form takes the form ω = ∑_i=1^n dx_i∧ dξ_i in U, and n functionally independent smooth integrals q_1, …, q_n : U → ℝ Poisson commuting with all f_1, …, f_n such that q_i is one of the following possible components: * regular component: q_i = x_i, * elliptic component: q_i = 1/2(x_i^2 + ξ_i^2), * hyperbolic component: q_i = x_iξ_i, * focus-focus components (exist in pairs): q_i = x_iξ_i + x_i+1ξ_i+1 and q_i+1 = x_iξ_i+1 - x_i+1ξ_i. We will eventually focus on 4-dimensional integrable systems. In that case, the following six different types of non-degenerate singular points can occur: * rank 0: elliptic-elliptic, hyperbolic-hyperbolic, elliptic-hyperbolic and focus-focus, * rank 1: elliptic-regular and hyperbolic-regular.
Williamson <cit.> (see also Bolsinov and Fomenko <cit.>) showed that to determine the type of a non-degenerate rank 0 singular point of a 4-dimensional integrable system (M, ω, F=(f_1, f_2)), it is sufficient to find the eigenvalues of the linearised Hamiltonian vector field of the linear combination c_1 f_1 + c_2 f_2 for generic c_1, c_2 ∈ ℝ at this singular point, since * elliptic components have pairs of purely imaginary eigenvalues, * hyperbolic components have pairs of purely real eigenvalues, * focus-focus components have quadruples of complex eigenvalues with non-zero real and imaginary parts. Note also that, if λ is an eigenvalue of multiplicity k, then so are -λ as well as the complex conjugates of λ and -λ (cf. van der Meer <cit.>). Concerning monodromy, we note that if Λ is a (compact) leaf containing n singular points of which all are of focus-focus type, then it has been shown that the monodromy around Λ is given by M = [ 1 n; 0 1 ], see the works by Matsumoto <cit.>, Lerman and Umanskii <cit.>, Matveev <cit.>, and Zung <cit.>. This result will be drawn on again in our discussion of semitoric and hypersemitoric systems. §.§ Toric systems Let us start with the `easiest' class of integrable systems: Let (M,ω,F) be an integrable system with M compact and connected. If all integrals of (M,ω,F) generate an effective S^1-action, then the system is said to be a toric system. Atiyah <cit.> and Guillemin and Sternberg <cit.> showed that the image of the momentum map of a toric system is a convex polytope, called the momentum polytope. Later, Delzant <cit.> showed that toric systems are classified up to isomorphism by their momentum polytope. Delzant's classification was then extended to non-compact manifolds by Karshon and Lerman <cit.>. Note that the singular points of a toric system are all non-degenerate and only contain components of elliptic or regular type. §.§ Semitoric systems Delzant's <cit.> classification of toric manifolds has been generalized by Pelayo and Vũ Ngọc <cit.> together with Palmer, Pelayo and Tang <cit.> to the following class of integrable systems, called “semitoric systems”. Semitoric systems are a natural class of systems, generalizing toric systems by relaxing the assumption of periodicity on one of the integrals defining the system. Semitoric systems are closely related to so-called almost-toric systems, see for instance Symington <cit.> and Vũ Ngọc <cit.>. The notion “semitoric” is natural, and has been used in different contexts, including symplectic geometry of Hamiltonian torus actions by Karshon and Tolman <cit.>, integrable systems by Vũ Ngọc <cit.> and Pelayo and Vũ Ngọc <cit.>, partially equivariant embedding problems in toric geometry by Pelayo <cit.>, and mathematical physics by Martini and Taylor <cit.>. We refer to Pelayo <cit.> for further discussion and references. Let (M, ω, F=(J,H)) be a 4-dimensional integrable system, where M is connected. Then (M, ω, F=(J,H)) is a semitoric system if * J is proper and generates an effective S^1-action, * F has only non-degenerate singularities (if any) and none of them admit hyperbolic components. Note that, under the assumptions of Definition <ref>, Vũ Ngọc <cit.> showed that the fibres of F are connected, thus generalizing the connectivity statement from the toric case as shown by Atiyah <cit.> and Guillemin and Sternberg <cit.>. The main difference between toric and semitoric systems is the possible appearance of focus-focus singular points.
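Since focus-focus points are the decisive new feature of semitoric systems, it may help to see the eigenvalue criterion recalled above at work on the local normal forms listed earlier. The following sympy sketch (not part of the original survey) computes the eigenvalues of the linearised Hamiltonian vector field at the origin for the focus-focus and the elliptic-elliptic models; the coefficients a, b play the role of the generic constants c_1, c_2.

import sympy as sp

x, y, xi, eta = sp.symbols('x y xi eta', real=True)
a, b = sp.symbols('a b', positive=True)     # generic coefficients c_1, c_2

def ham_vector_field(H):
    # X_H for the symplectic form dx^dxi + dy^deta
    return sp.Matrix([sp.diff(H, xi), sp.diff(H, eta), -sp.diff(H, x), -sp.diff(H, y)])

def eigenvalues_at_origin(H):
    A = ham_vector_field(H).jacobian([x, y, xi, eta])
    return A.eigenvals()

# focus-focus model: a*q1 + b*q2 with q1 = x*xi + y*eta, q2 = x*eta - y*xi
print(eigenvalues_at_origin(a*(x*xi + y*eta) + b*(x*eta - y*xi)))
# quadruple a +/- i b, -a +/- i b: non-zero real and imaginary parts

# elliptic-elliptic model: a*(x^2 + xi^2)/2 + b*(y^2 + eta^2)/2
print(eigenvalues_at_origin(a*(x**2 + xi**2)/2 + b*(y**2 + eta**2)/2))
# purely imaginary pairs +/- i a, +/- i b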
Note that if c ∈ F(M) is a focus-focus singular value, then its preimage F^-1(c) has the shape of a so-called pinched torus where the number of pinches equals the number of focus-focus points in the fibre, cf. for instance Bolsinov and Fomenko <cit.>. Vũ Ngọc <cit.> showed that one can associate an equivalence class of polygons with the image of the momentum map of a semitoric system. But unlike to the toric case, this is not enough to classify semitoric systems. Pelayo and Vũ Ngọc <cit.> were able to classify so-called simple semitoric systems, i.e. semitoric systems for which each fibre of J contains at most one focus-focus point, by formulating the following five invariants: * the number of focus-focus points, * the Taylor series or singularity type invariant, * the polygon invariant, * the height invariant, and * the twisting index invariant. Palmer, Pelayo and Tang <cit.> extended the result to the non-simple case, building on the symplectic classification of multi-pinched focus-focus fibres by Pelayo and Tang <cit.>. The five invariants will be discussed further in Section <ref>, where also two examples will be covered, namely the coupled angular momenta (Section <ref>), and an example for which the polygon takes the shape of an octagon (Section <ref>). Other important examples of semitoric systems are the spherical pendulum (cf. Dullin <cit.>) and the Jaynes-Cummings model (cf. Babelon, Cantini and Douçot <cit.>, Pelayo and Vũ Ngọc <cit.>, and Alonso, Dullin and Hohloch <cit.>). §.§ Hypersemitoric systems Hohloch and Palmer <cit.> considered a yet more general class of integrable systems than semitoric systems by allowing for singular points with hyperbolic components and certain degenerate singular points, namely so-called parabolic singular points: a singular point p of an integrable system (M, ω, F=(f_1,f_2)) is parabolic if there exists a neighbourhood U ⊂ M of p with (generally non-canonical) coordinates (x, y, λ, ϕ) and functions q_i = q_i(f_1,f_2) for i ∈{ 1,2} of the form q_1 = x^2 - y^3 + λ y q_2 = λ. A coordinate free definition is given in Bolsinov, Guglielmi and Kudryavtseva <cit.>. Note that the same normal form in fact applies to parabolic orbits, which means that from the smooth point of view, there is only one type of degenerate singularities appearing in hypersemitoric systems (for more details, see Kudryavtseva and Martynchuk <cit.>). Parabolic points are also known under the name of cusps or cuspidal points. Moreover, parabolic points naturally appear as transition points between (families of) elliptic-regular and hyperbolic-regular points. The following definition generalizes the natural notions of toric and semitoric systems we have seen earlier in this paper, and appears in recent work by Hohloch and Palmer <cit.>, following also work by Kalashnikov <cit.> as explained below. A 4-dimensional integrable system (M, ω, F=(J,H)) is called hypersemitoric if * J is proper and generates an effective S^1-action, * all degenerate singular points of F (if any) are of parabolic type. Note that the existence of a global S^1-action prevents the appearance of hyperbolic-hyperbolic singularities in a hypersemitoric system. The original motivation for introducing this class, however, comes from the result of Hohloch and Palmer <cit.> stating that any 4-dimensional Hamiltonian system X_J which generates an effective S^1-action is extendable to a hypersemitoric system (M, ω, (J,H)). 
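The name `cusp' for a parabolic point can be made concrete directly from the normal form above. The following sympy sketch (not part of the original survey) computes the critical set of the map (q_1, q_2) from the parabolic normal form and its image: the singular values trace out the semicubical parabola 27 q_1^2 = 4 q_2^3, i.e. a cusp, with the parabolic value at its tip.

import sympy as sp

x, y, lam, t = sp.symbols('x y lambda t', real=True)

q1 = x**2 - y**3 + lam*y      # parabolic (cusp) normal form
q2 = lam

# the rank of d(q1, q2) drops exactly where dq1/dx = dq1/dy = 0 (dq2 = dlambda never vanishes)
print(sp.solve([sp.diff(q1, x), sp.diff(q1, y)], [x, lam], dict=True))
# [{x: 0, lambda: 3*y**2}]

# image of the critical curve, parametrised by y = t
on_crit = {x: 0, y: t, lam: 3*t**2}
Q1, Q2 = q1.subs(on_crit), q2.subs(on_crit)
print(Q1, Q2)                               # 2*t**3, 3*t**2
print(sp.expand(27*Q1**2 - 4*Q2**3))        # 0, i.e. the cusp curve 27*q1^2 = 4*q2^3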
Furthermore, the set of hypersemitoric systems is open in the set of 4-dimensional integrable systems with a global effective Hamiltonian circle action (see Kalashnikov <cit.>). Dullin and Pelayo <cit.> showed that, starting with a semitoric system, one can use a subcritical Hamiltonian-Hopf bifurcation (which transforms a focus-focus point into an elliptic-elliptic point, see Sections <ref> and <ref>) to generate a flap (see Section <ref>) on said system, thus creating a hyperbolic semitoric system (cf. <cit.>). Although the name of this type of system is very similar to the name hypersemitoric, they are defined differently. Hyperbolic semitoric systems require the same conditions as hypersemitoric systems for the integral J generating a circle action. However, the set of hyperbolic singularities in hyperbolic semitoric systems is required to be non-empty, and the set of degenerate singularities is required to be isolated, not necessarily of parabolic type. Nevertheless, many hypersemitoric systems can thus be generated by performing subcritical Hamiltonian-Hopf bifurcations, together with so-called blow-ups (also known as corner chops, see for instance Hohloch and Palmer <cit.> and references therein) on the (newly generated) elliptic-elliptic points. §.§ Topological invariants: atoms and molecules Finally, we will recall a complete topological invariant for a generic isoenergy level of a two degree of freedom integrable system, which was introduced by Fomenko and Zieschang <cit.>. This invariant is intimately linked to hyperbolic-regular and elliptic-regular points and naturally appears in (hyper)semitoric systems as well as in systems without a global S^1-action, which in fact form a majority of known integrable systems (including the Kovalevskaya top and many other integrable cases in rigid body dynamics, various geodesic flows, billiards, etc.). We will follow the presentation of Bolsinov and Fomenko <cit.>. Let f be a Morse function on a manifold M. Note that the leaves of f foliate the manifold. Let x ∼ y if and only if x and y are in the same leaf of f and denote by Γ := M / ∼ the space of leaves of f. Since f is a Morse function, Γ is in fact a graph, called the Reeb graph of f on M, where singular leaves give rise to the vertices. There are two types of vertices: * a vertex is called an end vertex if it is the end of one edge only, * otherwise it is called an interior vertex. Note that the end vertices of a Reeb graph correspond to local minima and maxima (thus elliptic points) of the Morse function, whilst the interior vertices correspond to saddle-points (thus hyperbolic points). Let f : M →ℝ be a Morse function on a 2-dimensional surface M. An atom is a tubular neighbourhood denoted by P^2 of a singular fibre f^-1(c) together with the fibration f : P^2 →ℝ on this neighbourhood. The atom is orientable if the surface P^2 is orientable and non-orientable otherwise. We now give a brief overview of the so-called simple atoms, which are atoms whose singular fibres contain only one singular point and which are referred to as atom A, atom B and atom B̃. There exist many more atoms, which are defined similarly to the aforementioned ones. A more detailed exposition can be found in Bolsinov and Fomenko <cit.>. Let us first consider atom A, which represents the case of local minima or maxima of the function f.
The Reeb graph of the atom is a line segment illustrating the energy levels of f together with an arrow pointing in the direction of increasing energy, and a symbol A illustrating the extrema. Thus, there exist two atoms of type A of which the associated Reeb graphs are sketched in Figure <ref>. One can do a similar construction for saddles. Note, however, that there exist both orientable and non-orientable saddles, and they lead to atoms of type B and B̃, respectively. One can generate such atoms by considering a cylinder and gluing a strip to one of its ends (more specifically, attaching an index-1 handle). If the strip is not twisted, this can be deformed to an orientable saddle, whilst if it is twisted, it can be deformed to a non-orientable saddle. Figure <ref> shows the Reeb graphs of these atoms. There also exist atoms with more than one singular point in the singular fibre (cf. Bolsinov and Fomenko <cit.>). However, these atoms still form two main types: the first type consists only of atoms A, whilst the second type consists of all other atoms (which are in fact saddle atoms). Let now (M, ω, (H,f)) be an integrable system on a symplectic 4-manifold M and let Q = {x ∈ M | H(x) = constant} be a `generic' so-called isoenergy 3-surface (see Bolsinov and Fomenko <cit.> for the exact conditions on Q). Let Q/∼ be the space of leaves, which can also be pictured as a (Reeb) graph where the vertices correspond to the singular leaves. Now, the singular leaves correspond to so-called 3-atoms, which are defined similarly to the atoms we saw before, but now the neighbourhoods are 3-dimensional. It turns out that these 3-atoms are in one-to-one correspondence with the set of 2-atoms possibly endowed with a finite number of marked points or stars – corresponding to exceptional fibres of the Seifert fibration naturally associated to a 3-atom, see Bolsinov and Fomenko <cit.>. For simplicity, 2-atoms with stars will also be referred to as 2-atoms. Thus, we will consider the graph defined by Q/∼ with the vertices corresponding to 2-atoms. This graph is called the molecule of (M, ω, (H,f)) on Q. A molecule contains a lot of information about the foliation of the isoenergy surface Q. But this type of molecule consists of atoms glued together so far without the knowledge of how this gluing is performed. Keeping track of the gluing gives us the final piece of information that we need to give a molecule the meaning of an invariant: the gluing is performed by the so-called gluing matrix C_i = [ α_i β_i; γ_i δ_i ] ∈ GL(2, ℤ) with det C_i = -1. To the gluing matrix C_i, there are two invariants assigned, namely r_i := α_i/β_i mod 1 if β_i ≠ 0 and r_i := ∞ if β_i = 0, and ϵ_i := sign β_i if β_i ≠ 0 and ϵ_i := sign α_i if β_i = 0. These two invariants alone are not enough for our purposes, and so one more invariant has to be introduced. An edge e_i of a molecule W is called infinite if r_i = ∞, and finite otherwise. Cutting the molecule along finite edges splits it into several connected components. The components not containing any atoms of type A are called families. Let U_k be a family. Recall that the edges of atoms are `oriented' by arrows. An edge in U_k is said to be outgoing if the arrow points from a vertex inside U_k to a vertex outside U_k. In the opposite case an edge in U_k is called incoming. If the edge joins a vertex inside U_k to another vertex inside U_k, then the edge is called interior.
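The marks r_i and ϵ_i just defined (the third mark n_k is introduced next) can be read off mechanically from a given gluing matrix. The following small sketch (not part of the original survey; the two matrices are arbitrary illustrative choices) merely implements the two formulas above.

from fractions import Fraction
import math

def marks_r_eps(C):
    # C = [[alpha, beta], [gamma, delta]] with det C = -1
    (alpha, beta), (gamma, delta) = C
    assert alpha*delta - beta*gamma == -1
    if beta != 0:
        r = Fraction(alpha, beta) % 1      # r_i = alpha_i/beta_i mod 1
        eps = 1 if beta > 0 else -1        # eps_i = sign(beta_i)
    else:
        r = math.inf                       # r_i = infinity on an infinite edge
        eps = 1 if alpha > 0 else -1       # eps_i = sign(alpha_i)
    return r, eps

print(marks_r_eps([[1, 2], [0, -1]]))      # (Fraction(1, 2), 1)
print(marks_r_eps([[1, 0], [3, -1]]))      # (inf, 1)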
To each edge e_i in U_k we assign the following integer: Θ_i := ⌊α_i/β_i⌋ if e_i is an outgoing edge, ⌊-δ_i/β_i⌋ if e_i is an incoming edge, and -γ_i/α_i if e_i is an interior edge. With this, we construct the third, and final, invariant we want to associate to W, namely n_k := ∑_e_i∈ U_kΘ_i ∈ ℤ. The invariants r_i, ϵ_i and n_k will be called marks. One can now endow the molecule W with the three marks defined above, and define the marked molecule as the quadruple W^* := (W, r_i, ϵ_i, n_k). Fomenko and Zieschang <cit.> showed that two integrable systems on generic isoenergy 3-surfaces are Liouville equivalent if and only if their marked molecules coincide. Marked molecules are also known as Fomenko-Zieschang invariants. The collection of such marked molecules can be thought of as a topological portrait of the system, which contains more information than for example the topological types of the individual singular leaves/fibres. Since hypersemitoric systems only contain elliptic, hyperbolic-regular, focus-focus and parabolic points, but no hyperbolic-hyperbolic ones, one can show that marked loop molecules form complete local topological invariants of the torus fibration of a hypersemitoric system. In other words, the loop molecules around a given singularity of the hypersemitoric system determine its topological type. Note that the same is not true for general hyperbolic-hyperbolic singularities of integrable 2 degree of freedom systems; see Bolsinov and Oshemkov <cit.>. § SEMITORIC SYSTEMS In this section, we will briefly recall the construction of the five invariants of semitoric systems introduced by Pelayo and Vũ Ngọc <cit.> and their generalizations, then observe transitions from toric to semitoric systems by creating focus-focus points, and finally consider some explicit examples. Two semitoric systems (M_1,ω_1,(J_1,H_1)) and (M_2,ω_2,(J_2,H_2)) are said to be isomorphic if there exists a symplectomorphism φ : M_1→ M_2 such that φ^*(J_2,H_2) = (J_1,f(J_1,H_1)) for some smooth function f such that ∂ f/∂ H_1 > 0. Since semitoric systems always come with a smooth, globally defined action J, this definition is basically saying that two semitoric systems are equivalent if and only if the corresponding Lagrangian fibrations are fibrewise symplectomorphic (up to possibly changing J to ± J + const). Pelayo and Vũ Ngọc <cit.> showed that two simple semitoric systems are isomorphic if and only if all five invariants (defined below) are equal for the two systems. The simplicity assumption has been removed from the classification by Palmer, Pelayo and Tang <cit.>, but the invariants in the non-simple case are more complicated, and we do not present them here. §.§ The five semitoric invariants Let (M, ω, F=(J,H)) be a simple semitoric system. We will use the identification S^1 = ℝ/2πℤ in what follows. Let us now explain each of the five invariants in more detail. §.§.§ Number of focus-focus points Vũ Ngọc <cit.> proved that M has a finite number of focus-focus singular points. Denoting this number by n_FF, one has thus 0 ≤ n_FF < ∞. Then n_FF forms an invariant for semitoric systems (cf. Pelayo and Vũ Ngọc <cit.>). §.§.§ Taylor series invariant Denote the focus-focus points of (M, ω, F=(J,H)) by m_i for 1 ≤ i ≤ n_FF. Let us now consider one focus-focus point, and denote it by m without the index, to simplify the notation.
Recall from Section <ref> that there exists a neighbourhood U of m with symplectic coordinates (x,y,ξ,η) such that the quadratic parts of J and H span a Cartan subalgebra with the following basis: q_1 = xξ + yη, q_2 = xη - yξ. Note that the Hamiltonian flow generated by q_2 is 2π-periodic. We now follow the exposition in Vũ Ngọc <cit.>: Let Λ_z = F^-1(z) be a regular fibre near the singular fibre containing m. For any point A ∈Λ_z, denote by τ_1(z) the first return time of the flow generated by X_H to the X_J-orbit through A, and let τ_2(z) ∈ ℝ/2πℤ be the time it takes to close up this trajectory under the flow of X_J. Vũ Ngọc <cit.> showed that, for a suitable determination of the complex logarithm ln z, the functions σ_1(z) := τ_1(z) + ℜ(ln z) and σ_2(z) := τ_2(z) - ℑ(ln z) extend to smooth and single-valued functions in a neighbourhood of c = F(m). Moreover, σ := σ_1 dz_1 + σ_2 dz_2 yields a closed 1-form under the identification z=(z_1, z_2) ∈ ℝ^2. Define S via dS = σ and S(c) = 0 and denote the Taylor series of S at z = c by (S)^∞. The Taylor series invariant, for all focus-focus points m_i, 1 ≤ i ≤ n_FF, is then given by the n_FF-tuple ((S_i)^∞)_i=1^n_FF. There is another way to define the Taylor series invariant. Let γ_z^1 and γ_z^2 be a basis of the first homology group of the torus Λ_z that varies smoothly with the base point z such that γ_z^1 is a representative of the cycle corresponding to the (periodic) flow of J and γ_z^2 represents a homology cycle obtained by first moving with the flow of X_H using time τ_1(z) and then with the flow of X_J using time τ_2(z). Now consider the action integral 𝒜(z) := ∫_γ_z^2α, where α is a primitive of ω on some neighbourhood of Λ_z. Then one finds, for z ≃ (z_1,z_2) ∈ ℝ^2, that d𝒜(z) = τ_1(z) dz_1 + τ_2(z) dz_2. One can in fact interpret S as a regularised action integral via S(z) = 𝒜(z) - 𝒜(c) + ℜ(z ln z - z). Indeed, since d ℜ(z ln z - z) = ℜ(ln z) dz_1 - ℑ(ln z) dz_2, this regularised action satisfies dS = σ. Note that the above construction involves a certain number of choices which have to be made compatibly with the construction of the polygon invariant and the twisting index invariant below. The exact dependencies are explained in detail in the forthcoming article by Alonso, Hohloch, and Palmer <cit.>. §.§.§ Polygon invariant Let m_1, …, m_n_FF be the focus-focus points and denote by c_1:=F(m_1), …, c_n_FF:= F(m_n_FF) their values ordered such that the first coordinate of the focus-focus values increases. Denote by B := F(M) the image of the momentum map. Vũ Ngọc <cit.> showed that the set B_r ⊆ F(M) of regular values of F coincides with the set int B ∖{c_1, …, c_n_FF}. One can render B_r simply connected by making a vertical cut from each focus-focus value c_i either upwards or downwards to the boundary of F(M). By the Arnol'd-Liouville theorem, the momentum map induces an integral affine structure on B (which in general does not agree with the one induced by the inclusion of B into ℝ^2). Recall that affine transformations leaving a vertical line invariant arise from vertical translations composed with a matrix of the form T^k := [ 1 0; k 1 ] with k ∈ ℤ. Now denote by l_i ⊂ ℝ^2 the vertical line through the focus-focus singular value c_i ∈ ℝ^2. This line splits ℝ^2 into two half-spaces. For k ∈ ℤ, let t_l_i^k : ℝ^2 → ℝ^2 be the map that leaves the left half-space invariant and shears the right half-space by T^k. We accommodate now all focus-focus singular values by setting 𝐤 := (k_1, …, k_n_FF) and defining t_𝐤 := t_l_1^k_1∘…∘ t_l_n_FF^k_n_FF.
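The piecewise-affine shears t_l_i^k just introduced are easy to make explicit. A minimal sketch (not part of the original survey; the cut line and the test points are arbitrary illustrative choices): points on or to the left of the vertical line are kept fixed, while points to the right are sheared by T^k taken relative to the line, so that the map is continuous along l_i.

import numpy as np

def t_shear(point, line_x, k):
    # identity on the left half-plane, shear by T^k = [[1, 0], [k, 1]] (relative to the
    # vertical line x = line_x) on the right half-plane
    x, y = point
    if x <= line_x:
        return np.array([x, y])
    return np.array([x, y + k*(x - line_x)])

print(t_shear((2.0, 0.0), line_x=1.0, k=1))   # [2. 1.]  (sheared upwards)
print(t_shear((0.5, 0.0), line_x=1.0, k=1))   # [0.5 0.]  (unchanged)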
For each 1 ≤ i ≤ n_FF, let ϵ_i∈{-1, +1}, and denote by l_i^ϵ_i the vertical half line starting at c_i, going upwards if ϵ_i = +1, and downwards if ϵ_i = -1, and let l^ϵ := l_1^ϵ_1∪ … ∪ l_n_FF^ϵ_n_FF be the union of the lines running through all focus-focus values for a choice of ϵ := (ϵ_1, … , ϵ_n_FF). Then the set B ∖ l^ϵ is simply connected for all possible choices of ϵ_i. Vũ Ngọc <cit.> showed that there exists a homeomorphism f:=f_ϵ : B → ℝ^2 depending on the choices of ϵ and preserving J such that f(B) is a rational convex polygon. Restricted to B∖ l^ϵ, the homeomorphism f becomes a diffeomorphism onto its image which sends the integral affine structure of B_r ∖ l^ϵ to the integral affine structure of ℝ^2. The map μ := f ∘ F is called a generalized toric momentum map for (M, ω, F=(J,H)) (cf. Pelayo and Vũ Ngọc <cit.>). In order to turn the polygon Δ := μ(M) into an invariant of the underlying semitoric system, one needs to get rid of the choices involved in the construction of Δ. This is done by means of a group action: consider the group 𝒢 := {T^k | k ∈ ℤ} and the action of the group {-1, +1}^n_FF×𝒢 on (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) given by ((ϵ'_i)_i=1^n_FF, T^k) ·(Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) := (t_𝐮(T^k(Δ)), (l_i)_i=1^n_FF, (ϵ'_iϵ_i)_i=1^n_FF) where 𝐮 = ((ϵ_i- ϵ'_i)/2)_i=1^n_FF. Then the polygon invariant is the orbit of (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) under the above action (cf. Pelayo and Vũ Ngọc <cit.>). §.§.§ Height invariant For i ∈{1, …, n_FF}, consider the focus-focus singular points m_i and their images c_i := F(m_i) and let μ and Δ be as in Section <ref>. The height (or the volume) invariant, as introduced by Pelayo and Vũ Ngọc <cit.>, is given by the n_FF-tuple (h_1, …, h_n_FF) with h_i := pr_2(μ(m_i)) - min_s ∈ l_i∩Δ pr_2(s), where pr_2 : ℝ^2 → ℝ is the projection onto the second coordinate (in <cit.> it is explained how this height invariant corresponds to the volume of certain submanifolds, and hence it is sometimes called the volume invariant). The function h_i thus measures the distance between the focus-focus value in the polygon Δ=μ(M) and its lower boundary. Furthermore, h_i is independent of the choice of the generalized toric momentum map μ, since it can also be seen as the symplectic volume of certain level sets. §.§.§ Twisting index invariant Let U_i be a neighbourhood of a focus-focus singular point m_i∈ F^-1(c_i), and let V_i = F(U_i). Vũ Ngọc and Wacheux <cit.> showed that there exists a local symplectomorphism Ψ : (ℝ^4, ω_0) → (M, ω) sending the origin to m_i, and a local diffeomorphism G : ℝ^2 → ℝ^2 sending 0 to F(m_i) such that F ∘Ψ = G ∘ q_i, where q_i = (q_i^1, q_i^2) is given by (<ref>). Recall that q_i^2 generates a circle action, so it must correspond to J. If necessary, after composing Ψ with either/both of the canonical transformations (x, ξ) ↦ (-x, -ξ) and (x, y, ξ, η) ↦ (-ξ, -η, x, y), one finds that G is of the form G(q_i^1, q_i^2) = (q_i^2, G_2(q_i^1, q_i^2)), where ∂G_2/∂q_i^1(0) > 0. We will extend G_2(q_i^1, q_i^2) to another Hamiltonian function G_2(H, J), such that they are equal at their restriction to U_i. Here (H, J) is a new momentum map for the semitoric system, and G_2 : ℝ^2 → ℝ is some function to be discussed further below. Recall the action integral introduced in the construction of the Taylor series invariant (see Subsection <ref>): 𝒜_i(z) := ∫_γ_i, z^2α. Let G_i(z) := 𝒜_i(z) - 𝒜_i(c_i) for i = 1, …, n_FF.
Observe that G_i(0) is well defined and equal to zero since the actions 𝒜_i(z) are given by integrating a primitive 1-form over a loop on a Lagrangian torus Λ_z. Note that this could also have been seen by using the regularised action in (<ref>). Now, let us define the Hamiltonian function via H_i, p := G_i(J, H). Then lim_m → m_i H_i, p = 0. Note also that, by (<ref>), we get a Hamiltonian vector field X_i, p = (τ_i^1∘ F) X_J + (τ_i^2∘ F) X_H. This was discussed by Pelayo and Vũ Ngọc <cit.>. They called the momentum map ν := (J, H_i, p) the privileged momentum map for F = (J, H). Now, let μ be a generalized toric momentum map. As μ preserves J, its components satisfy (μ_1, μ_2) = (J, μ_2). As μ_i, J and H_i,p are all action variables, there exists an invertible matrix A ∈GL(2, ) such that (X_J, X_μ_2) = A(X_J, X_i, p). The matrix has to be of the form A = [ 1 0; k_i 1 ], hence X_μ_2 = k_i X_J + X_i, p. Pelayo and Vũ Ngọc <cit.> showed that k_i does not depend on X_i, p or G_i. The integer k_i is called the twisting index. Note that, if k_i is the twisting index of m_i, then locally μ = T^k_iν. Also, if the polygon is transformed by some T^r, then ν does not change, whilst μ→ T^rμ. Note that the twisting index depends on the polygon Δ. To introduce an actual invariant, similarly to Subsection <ref>, we consider the orbit of (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF, (k_i)_i=1^n_FF) under the action of {-1, +1}^n_FF×𝒢. Specifically, with 𝐮 := (u_i)_i=1^n_FF := ((ϵ_i-ϵ_iϵ'_i)/2)_i=1^n_FF, the action is given by ((ϵ'_i)_i=1^n_FF, T^k) · (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF, (k_i)_i=1^n_FF) = (t_𝐮(T^k(Δ)), (l_i)_i=1^n_FF, (ϵ'_iϵ_i)_i=1^n_FF, (k + k_i + ∑_j=1^ĩ_i u_j)_i=1^n_FF) where we set 0=:∑_j=1^0 u_j and where ĩ_i =i or ĩ_i= i-1 depending on the choice of certain conventions. This orbit is called the twisting index invariant (cf. Pelayo and Vũ Ngọc <cit.>). Note that the above formula differs slightly from the original one given in Pelayo and Vũ Ngọc <cit.> by the extra term ∑_j=1^ĩ_i u_j. This term accounts for the way in which changing cut directions affects the twisting index. Its absence in the original formula was pointed out to us by Yohann Le Floch and Joseph Palmer (for a detailed discussion, we refer to the forthcoming paper by Alonso, Hohloch, and Palmer <cit.>). §.§ Modifications and generalizations of the five invariants In fact, all five invariants are intimately related, and there is no need to consider them separately. Le Floch and Palmer <cit.> took three of the five semitoric invariants — the number of focus-focus points, the polygon invariant, and the height invariant — and joined them together to form a single invariant, called the marked semitoric polygon invariant. When Palmer, Pelayo and Tang <cit.> extended the classification to non-simple semitoric systems they gathered all five invariants into one big invariant, called the complete semitoric invariant. §.§ Supercritical Hamiltonian-Hopf bifurcation If one perturbs a toric system, one may obtain a semitoric system, in particular if an elliptic-elliptic point is transformed into a focus-focus point. Such a transformation is called a supercritical Hamiltonian-Hopf bifurcation. In coordinate form, it can more specifically be defined as follows (see in particular Equation (<ref>) below with a >0). Let 𝔊 be a Lie group acting on the space of smooth real-valued functions C^∞(^n) whose action is defined by g · f(x) = f(g^-1(x)) for g ∈𝔊, f ∈ C^∞(^n) and x ∈^n. 
Furthermore, let [x] denote the space of polynomials on ^n, and let [x]^𝔊 be the space of 𝔊-invariant polynomials. Hilbert showed that, if 𝔊 is compact, then there exist finitely many invariant polynomials ρ_i∈[x]^𝔊 for i = 1, …, k which generate [x]^𝔊 as an algebra (cf. van der Meer <cit.>). Such invariant polynomials ρ_i are called Hilbert generators. Let (x, y, ξ, η) be canonical coordinates on ^4 and define the following three Hilbert generators: J = x η - y ξ, X = 1/2(ξ^2 + η^2), and Y = 1/2(x^2 + y^2). When considering (hyper)semitoric systems, we will choose 𝔊 = S^1 to be given by the periodic Hamiltonian flow of X_J. Then van der Meer <cit.> showed that there exists the following equivariant normal form for a Hamiltonian-Hopf bifurcation Ĥ_s = J + X + s Y + a Y^2, where s, a ∈ are parameters with a ≠ 0, which we for simplicity take as a definition for this type of bifurcation. If a > 0 the bifurcation is called supercritical, and subcritical otherwise. Note that here the momentum map is given by (J, Ĥ_s). Recall that the singular points in a 2-degree of freedom toric system all have only elliptic and/or regular components. If we perturb one of the integrals of a 2-degree of freedom toric system as in the above normal form, then we can make one of the elliptic-elliptic singular points turn into a focus-focus point. On the level of eigenvalues, 4 purely imaginary eigenvalues at an elliptic-elliptic point collide when the bifurcation parameter attains the value s = 0 and then change into four complex eigenvalues (cf. van der Meer <cit.>). One can see two examples of supercritical Hamiltonian-Hopf bifurcations in Figure <ref> and Figure <ref>. The subcritical case, when the sign of a is negative, is treated in Section <ref>. §.§ Examples To compute the semitoric invariants explicitly for given systems has proven to be very difficult since it needs the combination of theoretical knowledge and strong computational skills. §.§.§ Coupled angular momenta system Consider the manifold M := S^2× S^2 and equip it with the symplectic form ω := - (R_1ω_S^2⊕ R_2ω_S^2) where ω_S^2 is the standard symplectic form on S^2 and R_1, R_2∈^>0. When Sadovskií and Zhilinskií <cit.> studied the so-called coupled angular momenta system, they found a focus-focus point and nontrivial monodromy. Since this system is both interesting from a physics point of view and not very complicated from a mathematical point of view, it recently became a popular subject to study. Le Floch and Pelayo <cit.> showed that the coupled angular momenta system on M, given in Cartesian coordinates by J(x_1,y_1,z_1,x_2,y_2,z_2) := R_1(z_1-1) + R_2(z_2+1), H(x_1,y_1,z_1,x_2,y_2,z_2) := (1-t)z_1 + t(x_1x_2 + y_1y_2 + z_1z_2), describes a semitoric system for all t ∈∖{t^-,t^+}, where t^± := R_2/2R_2 + R_1∓ 2√(R_1R_2). The system has four singular points of rank 0 which are located at the top and bottom of the spheres, i.e. when (z_1,z_2) = (± 1, ± 1). Three of the points are always elliptic-elliptic, whilst (1, -1) is a focus-focus point if t^- < t < t^+ and elliptic-elliptic if t < t^- or t > t^+. Thus, the number of focus-focus points invariant is 0 if (1, -1) is elliptic-elliptic, or 1 if (1, -1) is focus-focus. For some values of t, the moment image is plotted in Figure <ref>. Le Floch and Pelayo <cit.> computed, for certain parameter values, the first two terms of the Taylor series, the polygon, and the height invariant for this system. The full classification was achieved by Alonso, Dullin and Hohloch <cit.>. 
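That J and H indeed Poisson commute can be verified symbolically. The sketch below (not part of the original survey) uses the spin brackets {x_i, y_i} = z_i/R_i (and cyclic permutations) on each sphere factor; these agree with the symplectic form above up to an overall sign convention, which does not affect the vanishing of the bracket.

import sympy as sp

R1, R2, t = sp.symbols('R_1 R_2 t', positive=True)
x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2', real=True)

def bracket(f, g):
    # spin Poisson bracket on S^2 x S^2: {x_i, y_i} = z_i/R_i and cyclic; the factors commute
    total = 0
    for (x, y, z, R) in [(x1, y1, z1, R1), (x2, y2, z2, R2)]:
        fx, fy, fz = sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)
        gx, gy, gz = sp.diff(g, x), sp.diff(g, y), sp.diff(g, z)
        total += ((fx*gy - fy*gx)*z + (fy*gz - fz*gy)*x + (fz*gx - fx*gz)*y) / R
    return sp.simplify(total)

J = R1*(z1 - 1) + R2*(z2 + 1)
H = (1 - t)*z1 + t*(x1*x2 + y1*y2 + z1*z2)

print(bracket(J, H))   # 0, i.e. J and H Poisson commute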
The semitoric invariants of the coupled angular momenta system are as follows: The number of focus-focus points is either zero or one, see above. The Taylor series invariant is of the form S(j,k) = j arctan( R_2^2(2t - 1) - R_1R_2(t + 1) + R_1^2t/(R_1 - R_2)R_1 r_A) + k ln( 4 R_1^5/2 r_A^3/R_2^3/2(1 - t) t^2) + j^2/16 R_1^4 R_2 r_A^3( R_2^4(2t - 1)^3 - R_1R_2^3(32t^3 - 46t^2 + 17t - 1) - 3R_1^2R_2^2t(4t^2 - 7t + 1) + R_1^3R_2(3 - 5t)^2 - R_1^4t_3) + jk(R_2 - R_1)/8R_1^3R_2r_A^3( R_2^2(2t - 1)^2 - 2R_1R_2t(6t - 1) + R_1^2t^2) + k^2/16R_1^4R_2r_A^3( R_2^4(2t - 1)^3 - R_1R_2^3(16t^3 - 42t^2 + 15t + 1) - R_1^2R_2^2t(28t^2 - 3t -3) + R_1^3R_2t^2(13t - 3) + R_1^4t^3) + 𝒪(3), where r_A = √((R_1^2 + 4R_2^2)(t - t^-)(t^+ - t)). The polygon and twisting index invariants are illustrated in Figure <ref>. Set R:= R_2/R_1. Then the height invariant of the coupled angular momenta is given by h = 2 min(R_1, R_2) + R_1/π t( r_A - 2 R t arctan( r_A/R - t) - 2 t arctan( r_A/R + t - 2 R t) ). §.§.§ The (semi)toric octagon system De Meulenaere and Hohloch <cit.> constructed a semitoric system with four focus-focus singular points. The system was created by first considering the octagon Δ obtained by chopping off the corners of the square [0, 3] × [0,3]. Since Δ turned out to be a Delzant polygon, Delzant's <cit.> construction could be used to construct a toric system which has Δ as image of the momentum map. This is done by means of symplectic reduction of ^8 (equipped with its standard symplectic structure) and yields a 4-dimensional, compact, connected, symplectic manifold (M_Δ, ω_Δ). A point on M_Δ is written as an equivalence class of the form [z] = [z_1, …, z_8] with z_i∈ for i = 1, …, 8. The (toric) momentum map F = (J, H):(M_Δ, ω_Δ) →^2 is given by J([z_1, …, z_8]) = 1/2z_1^2, H([z_1, …, z_8]) = 1/2z_3^2. Denote by the real part of a complex number. By perturbing H to H_t: = (1-2t) H + t γ( z̅_2z̅_3z̅_4z_6z_7z_8) for 0 < γ < 1/48, De Meulenaere and Hohloch <cit.> obtained a system with momentum map (J, H_t):(M_Δ, ω_Δ) →^2 that is toric for 0 ≤ t < t^-, semitoric for t^- < t < t^+, and toric again for t^+ < t ≤ 1, where t^- := 1/2(1 + 24 γ) and t^+ := 1/2(1 - 24 γ). Note that 0 < t^- < 1/2 and 1/2 < t^+ < 1. At t = 1/2, the system has two focus-focus fibres, each containing two focus-focus points, see Figure <ref>. The two fibres then have the shape of double pinched tori. Apart from one representative of the polygon invariant and the number of focus-focus point, no semitoric invariants have yet been calculated. §.§ State of the art concerning other semitoric systems Spread over the literature (cf. works by Babelon, Dullin, Le Floch, Pelayo, Vũ Ngọc, and others), there are various partial results concerning the computation of the semitoric invariants for certain parameter values for certain systems. For instance, a Taylor series type invariant has been calculated by Dullin <cit.> for the spherical pendulum (which is, strictly speaking, not a semitoric system due to lack of properness). Pelayo and Vũ Ngọc <cit.> computed the number of focus-focus points, the polygon, and the height invariant for the so-called coupled spin oscillator system. Alonso, Dullin and Hohloch <cit.> completed the set of semitoric invariants for this system by computing the Taylor series and twisting index invariant. Both of these systems have only one focus-focus point. Hohloch and Palmer <cit.> generalized the coupled angular momenta system to a family of semitoric systems with two focus-focus points. 
Alonso and Hohloch <cit.> computed the polygon and height invariant for a subfamily and Alonso, Hohloch and Palmer <cit.> are currently computing its twisting index invariant. Le Floch and Palmer <cit.> devised semitoric systems arising from Hirzebruch surfaces and computed their number of focus-focus points, the polygon invariant, and, for certain parameter values, also their height invariant. § HYPERSEMITORIC SYSTEMS In this section, we give a brief overview of existing and related results concerning hypersemitoric systems. Recall that, compared to semitoric systems, a hypersemitoric system (Definition <ref>) may in addition have singular points with hyperbolic components and degenerate singular points of parabolic type. §.§ Flaps and pleats/swallowtails Two possibilities of how hyperbolic-regular and parabolic points occur in hypersemitoric systems are so-called flaps and pleats/swallowtails. A good exposition with examples for pleats/swallowtails can be found in Efstathiou and Sugny <cit.>, and for flaps see Efstathiou and Giacobbe <cit.>. There are various ways to visualize flaps and pleats/swallowtails. Instead of using the image of the momentum map over which a hypersemitoric (or even more general) system gives rise to a singular fibration with possibly disconnected fibres, it makes sense to remember the branching and disconnectedness by working with the so-called bifurcation complex (also known as unfolded momentum domain). One can either identify it with the leaf space of a system (M, ω, F=(J, H)) or describe it directly as a stratified manifold V together with a map F̃: M → V and a projection τ : V → ℝ^2 such that τ∘F̃ = F and the regular level sets of F̃ correspond to the connected components of the level sets of F. We will summarize some of their findings. In the preimage under τ of a sufficiently small neighbourhood of a parabolic value, the bifurcation complex has two sheets: one sheet, the local base ℬ, contains regular values and a compact line segment ℒ of hyperbolic-regular values, and one sheet, the local flap ℱ, contains a line of elliptic-regular and of hyperbolic-regular values (which meet at a parabolic value) as well as regular values `between' these lines, see Figure <ref>. Both sheets intersect (or rather touch) each other along the line segment of hyperbolic-regular values including its parabolic end point. The topological boundary of ℱ consists of the line segments of elliptic-regular and hyperbolic-regular values joined at the parabolic value and a line of regular values, called the free boundary. Flaps and pleats/swallowtails now arise as follows: Consider a system with a compact line segment ℒ of hyperbolic-regular values with parabolic end points denoted by c_1 and c_2. For i ∈{ 1,2}, let ℬ_i be their local bases and ℱ_i their local flaps. If one glues the free boundary of ℱ_1 to the free boundary of ℱ_2, this will define a flap topology around ℒ, see Figure <ref>. If the free boundary of ℱ_1 is glued to the boundary of ℬ_2, and the free boundary of ℱ_2 is glued to the boundary of ℬ_1, this will define a pleat topology, see Figure <ref>. Efstathiou and Giacobbe <cit.> showed that the bifurcation complex in an open neighbourhood of ℒ can have either the pleat topology or the flap topology. Efstathiou and Giacobbe <cit.> proved another interesting result: Let p and q be coprime integers and let S^3 := { (z_1, z_2) ∈ ℂ^2 | |z_1|^2 + |z_2|^2 = 1 } be the unit sphere in ℂ^2.
Consider the (free) action of ℤ_p := ℤ/pℤ on S^3 given by (z_1, z_2) ↦(exp(2 π i / p) z_1, exp(2 π i q / p) z_2). The lens space L(p,q) := S^3 / ℤ_p is the orbit space defined by this action. Then, with ℒ as above, the type of lens space L(p, 1) topologically embedded in F^-1(ℒ) determines the monodromy of the Lagrangian fibration in a neighbourhood of ℒ up to a sign determined by the choice of orientations. §.§ Subcritical Hamiltonian-Hopf bifurcations Recall from Section <ref> that a semitoric system with focus-focus points may arise via supercritical Hamiltonian-Hopf bifurcations from a toric one. Analogously, a hypersemitoric system with flaps may arise from a semitoric one with focus-focus points via so-called subcritical Hamiltonian-Hopf bifurcations by `replacing' a focus-focus point by a (small) flap, see for instance Dullin and Pelayo <cit.>. To be more precise, recall the normal form Ĥ_s = J + X + s Y + a Y^2 from Equation (<ref>): If the sign of a is negative, then a focus-focus point (four complex eigenvalues) will first turn into a degenerate point (two purely imaginary eigenvalues of multiplicity 2) and then will bifurcate into an elliptic-elliptic point (four purely imaginary eigenvalues) from the value of which, lying on a flap, two lines of elliptic-regular values emanate that connect the elliptic-elliptic value to the parabolic values (cf. Section <ref>). The parabolic values are connected by a line of hyperbolic-regular values. In Figure <ref>, an example of a semitoric system that went through a subcritical Hamiltonian-Hopf bifurcation is displayed. §.§ Atoms, molecules, and classifications Recall from Section <ref> the notion of a marked molecule W^*, which is a complete isoenergy invariant of a 2 degree of freedom integrable system. The topology caused by the lines of elliptic-regular and hyperbolic-regular values in flaps and pleats (swallowtails) can in particular be described by marked molecules. Here one can consider `loop molecules' (see Figure <ref>) around the parabolic values with B-atoms describing the bifurcation of one of the two lines emanating from the cusp and A-atoms the other bifurcation. The important result in this context is that the loop molecule around the cusp is uniquely defined and moreover `knows' what happens in its vicinity, in the sense that the loop molecule completely determines the topology of the corresponding singular torus fibration. This result directly follows from the fact that a single parabolic orbit (more precisely, the associated compact singular fiber, which has the form of a cuspidal torus) gives rise to only one singular torus fibration up to a fibrewise homeomorphism, see Efstathiou and Giacobbe <cit.>. We conjecture that, in fact, more is true and that there is only one such torus fibration up to fibrewise diffeomorphisms, cf. Kudryavtseva and Martynchuk <cit.>. A similar topological result is known for elliptic-elliptic, elliptic-hyperbolic and focus-focus singularities of integrable systems on 4-manifolds, but not so for hyperbolic-hyperbolic singularities (having multiple hyperbolic-hyperbolic points on a singular fiber) which are in general not determined by their loop molecules only, see for instance <cit.>. Interestingly, in the smooth case, the fibrewise classification turns out to be different also in the case of focus-focus singularities (having multiple points on the same singular fibre), see Bolsinov and Izosimov <cit.>.
The fibres of hypersemitoric systems will be classified by means of a `labeled graph' in the forthcoming paper by Gullentops and Hohloch <cit.>, which extends the special case of hyperbolic-regular fibres studied in Gullentops' thesis <cit.>. §.§ Examples Hypersemitoric systems were first defined in Hohloch and Palmer <cit.>, who gave several examples for this class of systems. There are more examples in the paper by Gullentops and Hohloch <cit.> and Gullentops' thesis <cit.>. §.§.§ Hypersemitoric coupled angular momenta system Let J and H be as in the (semitoric) coupled angular momenta system, as discussed in Section <ref>. We will now modify H, such that we instead consider the following: H̃(x_1,y_1,z_1,x_2,y_2,z_2) := H(x_1,y_1,z_1,x_2,y_2,z_2) + sz_1^2, with parameter s ∈ ℝ. In the image of the momentum map F̃ = (J,H̃) for coupling parameter t = 0.5 (for which the semitoric case s = 0 always has a focus-focus value), one can generate flaps and pleats, see Figure <ref>. It turns out that the point p_1 = (0,0,1,0,0,-1) is of focus-focus type if s_p_1^- < s < s_p_1^+, where s_p_1^± = (R_1 ± 2 √(R_1 R_2))/(4R_2). If s < s_p_1^- or s > s_p_1^+, then p_1 is of elliptic-elliptic type. Numerics indicate that, if R_1 < R_2, a flap appears for s < s_p_1^-, and a pleat appears for some s > s_p_1^+. If s ∈{s_p_1^-,s_p_1^+}, then (0,0,1,0,0,-1) is a degenerate singularity. This can be shown by a similar procedure as in Le Floch and Pelayo <cit.>. Furthermore, the point p_2 = (0,0,-1,0,0,1) is a focus-focus point if s_p_2^- < s < s_p_2^+, where s_p_2^± = (R_1 ± 2 √(R_1 R_2) + 2R_2)/(4R_2). When s < s_p_2^-, F̃(p_2) is an elliptic-elliptic value on the boundary of the momentum map image. For some s > s_p_2^+ we have that F̃(p_2) is an elliptic-elliptic value which joins the pleat created by p_1, see Figure <ref>. §.§.§ The hypersemitoric octagon system A specific family of examples can be created by taking the toric octagon system constructed in De Meulenaere and Hohloch <cit.> and, instead of perturbing it only to a semitoric system (cf. Section <ref>), adding more perturbation terms to obtain a family of hypersemitoric systems. To be more precise, let F=(J, H) be as in Section <ref> and modify H to H_t with t = (t_1, t_2, t_3, t_4) ∈ ℝ^4 via setting H_t := (1 - 2t_1)H + ∑_i=1^4 t_iγ_i, with γ_1([z]) := 1/50 ℜ( z̅_2z̅_3z̅_4z_6z_7z_8), γ_2([z]) := 1/50|z_5|^4|z_4|^4, γ_3([z]) := 1/50|z_4|^4|z_7|^4, γ_4([z]) := 1/50|z_5|^4|z_7|^4. Gullentops and Hohloch <cit.> proved the appearance of flaps and pleats/swallowtails and their collisions for certain values of the parameter t, see for example Figure <ref>. Moreover, they studied the shape and topology of hyperbolic-regular fibres in the system (J, H_t) and showed that, for fibres over a hyperbolic-regular value, not only double tori (`two tori stacked on top of each other', resp. a figure eight loop times S^1) are possible, but that the number of `tori stacked on top of each other' possibly appearing as fibre of a hyperbolic-regular value is bounded from above by 13.
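A small consistency check (not part of the original survey, using the same spin brackets as in the sketch for the semitoric coupled angular momenta given earlier): the deformation term s z_1^2 commutes with J, so the deformed family (J, H̃) retains the global S^1-action generated by J for every s, which is the structural requirement in the definition of hypersemitoric systems.

import sympy as sp

R1, R2, s, t = sp.symbols('R_1 R_2 s t', positive=True)
x1, y1, z1, x2, y2, z2 = sp.symbols('x1 y1 z1 x2 y2 z2', real=True)

def bracket(f, g):
    # spin brackets {x_i, y_i} = z_i/R_i (cyclic) on each factor; cross terms vanish
    total = 0
    for (x, y, z, R) in [(x1, y1, z1, R1), (x2, y2, z2, R2)]:
        fx, fy, fz = sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)
        gx, gy, gz = sp.diff(g, x), sp.diff(g, y), sp.diff(g, z)
        total += ((fx*gy - fy*gx)*z + (fy*gz - fz*gy)*x + (fz*gx - fx*gz)*y) / R
    return sp.simplify(total)

J = R1*(z1 - 1) + R2*(z2 + 1)
H = (1 - t)*z1 + t*(x1*x2 + y1*y2 + z1*z2)
H_tilde = H + s*z1**2          # the deformed Hamiltonian from above

print(bracket(J, H_tilde))     # 0: J stays an integral for every value of s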
http://arxiv.org/abs/2307.10213v1
20230714133328
Mitigating Bias in Conversations: A Hate Speech Classifier and Debiaser with Prompts
[ "Shaina Raza", "Chen Ding", "Deval Pandya" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Vector Institute of Artificial Intelligence Toronto ON Canada [email protected] Toronto Metropolitan University Toronto ON Canada [email protected] Vector Institute of Artificial Intelligence Toronto ON Canada [email protected] Discriminatory language and biases are often present in hate speech during conversations, which usually lead to negative impacts on targeted groups such as those based on race, gender, and religion. To tackle this issue, we propose an approach that involves a two-step process: first, detecting hate speech using a classifier, and then utilizing a debiasing component that generates less biased or unbiased alternatives through prompts. We evaluated our approach on a benchmark dataset and observed a reduction in negativity due to hate speech comments. The proposed method contributes to the ongoing efforts to reduce biases in online discourse and promote a more inclusive and fair environment for communication. § INTRODUCTION In the era of social media and online platforms, communication and idea exchange have reached their peak. Despite many benefits, these platforms also facilitate the spread of hate speech and offensive language. Hate speech often contains biases, perpetuating stereotypes and discriminatory language, which exacerbates the negative impact of such content on different targeted groups (based on race, gender, religion) <cit.>. Addressing these biases is a crucial step towards developing unbiased text processing systems and fostering healthy online interactions. In this paper, we propose a debiasing technique that leverages language generation and in-context prompting <cit.> to minimize the influence of lexical biases. A prompt is an instruction usually consisting of a few words or sentences that provides context or constraints for the model to follow. The method works by first detecting hate speech using a classifier, then employing a debiasing component that generates less biased alternatives through incorporating context-aware prompts designed to reduce the presence of biased language patterns. We evaluate our approach on a benchmark dataset and demonstrate its effectiveness in debiasing hate speech texts. The results show a classifier accuracy of 95% and debiasing accuracy of 89%, along with a notable reduction in negative sentiment within hate speech comments. This method contributes to the ongoing efforts to reduce biases in online discourse. § RELATED WORK Bias in language models and embeddings is a broad and subjective topic <cit.>. Research has identified gender bias in popular embeddings such as GloVe and Word2Vec <cit.> and quantified biases using the word embedding association test (WEAT) <cit.> <cit.>. Efforts have been made to reduce biases in Transformer-based language models like BERT and GPT-3 <cit.> and in conversational AI systems <cit.>.
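For readers unfamiliar with it, the WEAT statistic mentioned above compares mean cosine similarities between two sets of target word vectors and two sets of attribute word vectors. The following numpy sketch (not from the paper; the vectors are random stand-ins for actual GloVe or Word2Vec embeddings) shows the effect-size computation.

import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    # s(w, A, B): mean cosine similarity of w to attribute set A minus to attribute set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

rng = np.random.default_rng(0)
X, Y, A, B = (list(rng.normal(size=(5, 50))) for _ in range(4))   # stand-in embeddings
print(weat_effect_size(X, Y, A, B))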
Hate speech refers to the use of derogatory, abusive or threatening language towards individuals or groups based on their race, ethnicity, gender, religion, sexual orientation or any other characteristic. Several studies have focused on developing machine learning models for detecting hate speech in text <cit.>. One work <cit.> presents a dataset, HOLISTIC BIAS, which consists of nearly 600 descriptor terms across 13 different demographic axes. The work demonstrates that this dataset is highly effective for measuring previously unmeasurable biases in token likelihoods and generations from language models, as well as in an offensiveness classifier. Another work <cit.> quantifies sentiment bias through individual and group fairness metrics <cit.> and proposes embedding- and sentiment-prediction-derived regularization on the language model's latent representations. The role of individual neurons and attention heads in mediating gender bias across three datasets designed to gauge a model's sensitivity to gender bias is also studied <cit.>. A related paper <cit.> describes metrics for measuring political bias in language generation and proposes a reinforcement learning framework for mitigating political biases in generated text. StereoSet <cit.>, a large-scale natural English dataset to measure stereotypical biases in four domains (gender, profession, race, and religion), is presented, and it is shown that popular models like BERT, GPT-2, RoBERTa, and XLNet exhibit strong stereotypical biases. A novel approach to mitigate gender disparity in text generation by learning a fair model during knowledge distillation is presented in a study <cit.>, and two modifications based on counterfactual role reversal are proposed: modifying teacher probabilities and augmenting the training set. A related work <cit.> shows that GPT-3, a state-of-the-art contextual language model, captures persistent Muslim-violence bias, demonstrating that it appears consistently and creatively in different uses of the model. These biases are even more severe than biases about other religious groups. Prompt engineering has recently emerged as a promising approach to mitigate biases in language models <cit.>. In the context of hate speech detection, few-shot learning <cit.> can be useful in scenarios where there is limited labeled data available for a particular language or dialect. Recent studies <cit.> have shown promising results in using few-shot learning for hate speech detection, which further motivates our approach. In this work, we use an OPT-based model for debiasing and introduce fairness-aware prompts to achieve the goal of debiasing the texts. § METHODOLOGY §.§ Hate speech classifier We utilize BERT <cit.> to build an efficient hate speech detector. The input to the BERT encoder consists of tokenized text sequences with special tokens [CLS] and [SEP] for classification and separation. The output from the BERT encoder is passed through a softmax layer to provide a probability distribution over the text being classified as hate speech or non-hate speech. The hate speech classifier is trained on a labeled dataset using binary cross-entropy loss as the objective function, which aims to minimize the difference between the predicted probability distribution and the true binary labels, encouraging the model to accurately classify hate speech and non-hate speech comments. §.§ Debiasing model We use the pre-trained OPT (Open Pre-trained Transformer) <cit.> model to debias the texts flagged by the hate speech classifier.
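Before turning to the prompting details, here is a minimal sketch of the classification stage from the previous subsection, fine-tuning a BERT-based binary classifier with the Hugging Face transformers library; the checkpoint name, the toy data, the label convention, and the training loop are assumptions made for illustration rather than the exact configuration used in our experiments.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy labeled examples: label 1 = hate speech, label 0 = non-hate speech (assumed convention).
texts = ["example of a hateful sentence", "example of a neutral sentence"]
labels = torch.tensor([1, 0])

enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, eps=1e-8)

model.train()
for epoch in range(3):  # mirrors the 3 training epochs reported below
    optimizer.zero_grad()
    out = model(**enc, labels=labels)  # cross-entropy loss over the two classes
    out.loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    probs = torch.softmax(model(**enc).logits, dim=-1)  # per-text class probabilities
print(probs)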
We employ few-shot learning with prompts to further refine the debiasing process. Our approach to debiasing through prompts is inspired by similar works such as <cit.>; however, we adapt it with our own task-specific examples. To incorporate the debiasing prompts into the OPT model, we prepend the prompt to the input text, resulting in a new input sequence. The OPT model processes this combined sequence and generates the debiased output based on the contextualized embeddings of both the prompt and the input text. We adjust the temperature of the OPT model during inference to control the level of randomness and diversity in the generated text samples. This helps to mitigate biases while preserving the overall meaning of the input text. §.§ Pipeline Our pipeline stacks both the hate speech classification and debiasing models as follows: (i) pass the input text through the hate speech classifier to obtain the predicted probability of it being hateful or non-hateful, and (ii) pass the hateful text through the debiasing model using few-shot learning with prompts to generate the debiased text during the language generation task. This two-stage approach first classifies the hate content in the input text and, if it is deemed hateful by the model, then debiases it. We show our pipeline approach in Figure 1. § EXPERIMENT AND RESULTS §.§ Dataset In this study, we utilize the Hate Speech Dataset from a white supremacy online community <cit.>. The dataset consists of a diverse range of text samples, including both overt and subtle instances of hate speech, posing significant challenges for automated classification and debiasing models. The dataset contains a total of 10,568 sentences, classified as conveying hate speech or not. The per-class statistics (number of sentences, average sentence length, word count, and vocabulary size) are as follows. * HATE: Sentences: 1,119, Sentence length: 20.39 ± 9.46, Word count: 24,867, Vocabulary: 4,148. * NOHATE: Sentences: 8,537, Sentence length: 15.15 ± 9.16, Word count: 144,353, Vocabulary: 13,154. In this study, we address the class imbalance problem in our dataset by using a hybrid approach combining under-sampling and over-sampling. We start by randomly removing instances from the 'NOHATE' class to balance the representation. After this, we augment the 'HATE' class by duplicating instances using a method like SMOTE. This ensures both classes are equally represented, improving our model's ability to learn from both hate speech and non-hate speech instances. §.§ Hyperparameters and Evaluation We fine-tuned the BERT model for hate speech classification to develop an efficient and accurate classifier. For the debiaser model, we use the GPT-2-small model with 117M parameters. We tuned the temperature (0.1 to 1.0) of our debiaser model to balance diversity in the generated text samples <cit.>. Additionally, we utilized few-shot learning with prompts to further refine the debiasing process. We used 5 and 10 examples per category in our few-shot learning experiments. For the classification task and to train the whole pipeline, we used 3 epochs, optimizing the parameters using the Adam optimizer with a learning rate of 5e-5, a weight decay of 0.5 and an epsilon value of 1e-8. We searched a grid of hyperparameters, including batch sizes of 4, 8, 16, 32, and 64. We limited the input sequences to 128 (subword) tokens and trained the model in batches of 16 (to avoid out-of-memory issues).
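To illustrate the two-stage classify-then-debias pipeline and the prompt-based generation step, the following is a minimal sketch built on Hugging Face pipelines; the prompt wording, the few-shot placeholders, the checkpoint names, the label convention, and the 0.5 decision threshold are illustrative assumptions rather than the exact prompts, models, or thresholds used in our experiments.

from transformers import pipeline

# Stage 1: hate speech classifier (assumed path to a fine-tuned BERT checkpoint).
clf = pipeline("text-classification", model="./bert-hate-speech-checkpoint")

# Stage 2: prompt-based debiaser built on a causal language model (e.g., an OPT checkpoint).
gen = pipeline("text-generation", model="facebook/opt-350m")

# Illustrative fairness-aware few-shot prompt with placeholder examples.
FEW_SHOT_PROMPT = (
    "Rewrite the following sentences so that they convey the same message "
    "without hateful, biased, or discriminatory language.\n"
    "Biased: <biased example 1>\nRewritten: <neutral rewrite 1>\n"
    "Biased: <biased example 2>\nRewritten: <neutral rewrite 2>\n"
)

def debias(text, threshold=0.5, temperature=0.7):
    pred = clf(text)[0]
    is_hateful = pred["label"] == "HATE" and pred["score"] >= threshold  # assumed label name
    if not is_hateful:
        return text  # non-hateful input is passed through unchanged
    prompt = FEW_SHOT_PROMPT + f"Biased: {text}\nRewritten:"
    out = gen(prompt, max_new_tokens=60, do_sample=True, temperature=temperature)
    # Keep only the continuation generated after the final "Rewritten:" marker.
    return out[0]["generated_text"][len(prompt):].strip()

print(debias("an example comment flagged as hateful"))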
We employ the F1 score (the harmonic mean of precision and recall) and accuracy. For training, we used Google Colab Pro, which provided access to an NVIDIA Tesla T4 GPU with 16 GB of memory. §.§ Performance Evaluation Table 1 presents the results of our proposed method for mitigating biases in hate speech classification. The classifier's performance is evaluated using F1-score metrics. Further, we introduce a novel measure, the bias score, to assess the effectiveness of our debiasing model. In this study, our classifier generates a quantifiable bias score for each text, reflecting the degree of hate speech bias. Each text is scored both before and after the debiasing process. A decrease in this bias score post-debiasing is indicative of successful bias mitigation. This metric provides us with a way to numerically gauge the effectiveness of our proposed debiasing model. The results show that the proposed method achieves the highest F1 score of about 95%, outperforming all other classification methods. Moreover, the debiasing performance of the proposed method with few-shot learning and prompts achieves an impressive F1 score of 89%, with a low standard deviation. This indicates that the proposed method can effectively mitigate biases in hate speech classification. To evaluate the performance of the debiaser model, we conducted experiments with zero-shot prompting and then with 5 and 10 in-prompt examples, respectively. The results show that using few-shot learning with prompts improves the F1 score compared to zero-shot learning. This indicates that the model is better able to mitigate biases in the input text with the guidance of a few examples. This technique has the potential to enhance the performance of the debiasing model further with more diverse and relevant prompt examples, which could be explored in future work. Classification performance: Next, we show the performance of the classifier model on a sample of 100 examples and report the results in Figure 2. As shown in Figure 2, the model made 45 true positive predictions, 5 false positive predictions, and 35 true negative predictions. The false negative count is 15. The high true positive rate indicates that the model correctly identifies a majority of the hate speech sentences, while the low false positive rate indicates that the model does not incorrectly label non-hate speech sentences as hate speech. However, the false negative count of 15 indicates that there is still room for improvement in correctly identifying all instances of hate speech. Note that these results are based on a sample of 100 instances and may not be representative of the model's performance on a larger dataset. Further evaluation and tuning of the model may be necessary to improve its overall performance. Debiasing performance: To further measure the effectiveness of our debiasing method, we measure and compare a range of performance metrics before and after debiasing. We first train our classifier on the original, biased dataset and calculate the bias score. We then apply our debiasing process to the data, train the classifier on the debiased dataset, and again calculate the bias score. The change in the bias score serves as an indication of the effectiveness of our debiasing method. As shown in Table 2, our debiasing method reduced the bias score by 30 percentage points, from 65% to 35%. This indicates a significant reduction in the model's inclination to misclassify certain non-hateful speech as hateful.
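As a quick sanity check of the classification figures reported in this section, the short sketch below recomputes the standard metrics from the confusion-matrix counts for the 100-example sample (45 TP, 5 FP, 35 TN, 15 FN); the counts are taken from Figure 2, and the derived values are simple arithmetic.

# Confusion-matrix counts reported for the 100-example sample.
tp, fp, tn, fn = 45, 5, 35, 15

accuracy = (tp + tn) / (tp + fp + tn + fn)          # 80/100 = 0.80
precision = tp / (tp + fp)                          # 45/50  = 0.90
recall = tp / (tp + fn)                             # 45/60  = 0.75
f1 = 2 * precision * recall / (precision + recall)  # ~0.82 on this sample

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")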
However, these debiasing gains were accompanied by a slight decrease in hate speech detection accuracy, with the accuracy dropping from 85% to 83%. Additionally, there was a slight increase in the false negative rate, which rose from 15% to 17%. These findings suggest that debiasing makes the model slightly less accurate in identifying hateful speech. On the other hand, the false positive rate decreased from 20% to 12%, indicating an improvement in correctly identifying non-hateful speech. § CONCLUSION We propose a hate speech classifier and debiaser that employs prompts to generate less biased alternatives. Our approach first detects hate speech using a classifier and then utilizes a debiasing component that generates less biased or unbiased alternatives. Our proposed approach has some limitations. One limitation is the size and quality of the available training data, which may not be sufficient to cover all possible scenarios. Another limitation is the need for fine-tuning of the language model, which may require additional resources and expertise. Furthermore, the effectiveness of our approach may be affected by the quality of the underlying language model used for generating alternative text. Overall, further research is needed to address the complex issue of online hate speech and biased language comprehensively.
http://arxiv.org/abs/2307.03904v1
20230708052256
Long-range interacting Stark many-body probes with Super-Heisenberg precision
[ "Rozhin Yousefjani", "Xingjian He", "Abolfazl Bayat" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas", "cond-mat.str-el" ]
[email protected] Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610051, China Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610051, China [email protected] Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610051, China In contrast to interferometry-based quantum sensing, where interparticle interaction is detrimental, quantum many-body probes exploit such interactions to achieve quantum-enhanced sensitivity. In most of the studied quantum many-body probes, the interaction is considered to be short-ranged. Here, we investigate the impact of long-range interaction at various filling factors on the performance of Stark quantum probes for measuring a small gradient field. These probes harness the ground-state Stark localization phase transition, which happens at an infinitesimal gradient field as the system size increases. Our results show that while super-Heisenberg precision is always achievable in all ranges of interaction, the long-range interacting Stark probe reveals two distinct behaviors. First, by algebraically increasing the range of interaction, the localization power is enhanced and thus the sensitivity of the probe decreases. Second, as the interaction range becomes close to that of a fully connected graph, its effective localization power disappears and the sensitivity of the probe starts to improve again. The super-Heisenberg precision is achievable throughout the extended phase until the transition point and remains valid even when the state preparation time is incorporated in the resource analysis. As the probe enters the localized phase, the sensitivity decreases and its performance becomes size-independent, following a universal behavior. In addition, our analysis shows that lower filling factors lead to better precision for measuring weak gradient fields. Long-range interacting Stark many-body probes with Super-Heisenberg precision Abolfazl Bayat August 12, 2023 ============================================================================= § INTRODUCTION Quantum sensors can achieve unprecedented precision in measuring time <cit.>, electric <cit.>, magnetic <cit.>, and gravitational fields <cit.>, way beyond the capability of their classical counterparts. They can be manufactured at atomic scales and have found applications in a wide range of fields, from cosmology <cit.> to biology <cit.>. The precision of estimating an unknown parameter h, encoded in a quantum density matrix ρ(h), is fundamentally bounded by the Cramér-Rao inequality as Δ h ≥ 1/√(Mℱ), where Δ h is the standard deviation that quantifies the accuracy of the estimation, M is the number of measurement repetitions, and ℱ is a positive quantity called the Fisher information. The scaling of the Fisher information with respect to sensing resources, such as the probe size L, is a figure of merit that can be used for comparing the precision of different sensors. Typically, the Fisher information scales algebraically with the size of the resource, namely ℱ∝L^β. In the absence of quantum features, classical sensing at best results in β=1, known as the standard limit. Quantum sensors, however, can achieve super-linear scaling with β>1 through exploiting quantum features such as entanglement <cit.>.
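To get a feeling for what different exponents β imply, consider a purely illustrative example (the numbers are not taken from our simulations): for a probe of L=100 sites, a single repetition M=1, and an order-one prefactor in ℱ∝L^β, the Cramér-Rao bound Δh ≥ 1/√(ℱ) gives Δh ∼ L^-1/2 = 0.1 at the standard limit (β=1), Δh ∼ L^-1 = 10^-2 at the Heisenberg limit (β=2), and Δh ∼ L^-3 = 10^-6 for an exponent as large as β=6 of the kind previously reported for Stark probes, showing how strongly the attainable precision depends on β.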
Originally, enhancement in precision was discovered for a special form of entangled states, known as GHZ states <cit.>, which results in β=2, also known as the Heisenberg limit <cit.>. Although there are several experimental demonstrations of GHZ-based quantum sensors <cit.>, their scalability is challenging due to the sensitivity of such delicate quantum states to decoherence. In addition, the interaction between particles in these probes is detrimental to their precision <cit.>. Strongly correlated many-body systems are very resourceful for realizing quantum technology tasks, such as sensing. These quantum probes, which harness the interaction between particles, are naturally scalable and expected to be more robust against decoherence. In particular, various forms of phase transitions in such systems have been used for achieving quantum-enhanced sensitivity, including first-order <cit.>, second-order <cit.>, Floquet <cit.>, dissipative <cit.>, time crystal <cit.>, topological <cit.>, many-body <cit.> and Stark localization <cit.> phase transitions. Other types of many-body probes profit from diverse measurement methods, including adaptive <cit.>, continuous <cit.>, and sequential <cit.> measurements. Since most of the sensing proposals in many-body probes have been dedicated to short-range interactions, a key open problem is whether long-range interactions can provide further benefits for sensing tasks. Long-range interactions naturally arise in certain quantum devices, such as ion traps <cit.> and Rydberg atoms <cit.>. The nature of these interactions makes their systematic study difficult, and except for some models, such as the Lipkin-Meshkov-Glick (LMG) model <cit.> and the long-range Kitaev chain <cit.>, the effect of long-range interaction on sensing precision remains almost untouched. Gradient field sensing is of major importance in various fields, including biological imaging <cit.> and gravimetry <cit.>. In the former, the ultra-precise sensing of a weak gradient magnetic field increases imaging resolution, enabling the visualization of smaller tumors for early cancer detection. In the latter, precise gravity measurement is essential for the detection of gravitational waves <cit.>, investigating the equivalence principle <cit.>, obtaining the fine-structure constant <cit.> and measuring Newton's gravitational constant <cit.>. Recently, we have shown that Stark probes can be exploited for measuring weak gradient fields with super-Heisenberg precision <cit.>, in which the scaling exponent β can be as large as β≅6. This sensor relies on the Stark localization transition, which can happen even in the presence of an infinitesimal gradient field in single- and multi-particle quantum systems. The effect of a longer range of interaction on this sensor has not yet been explored. Addressing this issue is essential since the physical platforms for experimental realization of Stark localization, including ion traps <cit.> and Rydberg atoms <cit.>, are naturally governed by long-range interactions. In this paper, we systematically study the effects of long-range interaction on the sensing capability of Stark probes. We show that the strong super-Heisenberg scaling of the Stark probes persists even in the presence of long-range interaction and is achievable throughout the extended phase of the system until the transition point. Our results show that different ranges of interaction leave distinct imprints on the scaling of the Fisher information.
Making the interaction more long-ranged enhances the localization and, hence, decreases the value of the Fisher information and β. The localization effect disappears as the system gets closer to a fully connected graph, and thus the sensitivity is enhanced again. The achievable super-Heisenberg scaling remains valid even when the state preparation time is taken into account in the resource analysis. Moreover, we provide a comprehensive investigation of the critical properties of long-range Stark probes and establish a concrete relationship between the critical exponents of the system through an extensive finite-size scaling analysis. Finally, we analyze the effect of the filling factor (i.e., the number of excitations per site) on the sensing power of our Stark probes. While super-Heisenberg scaling is achievable for all studied filling factors, lower filling factors provide better precision. This paper is organized as follows. We start by presenting the tools for assessing a quantum probe in section <ref>. After introducing our long-range Stark many-body probe in section <ref>, we present the numerical results of sensing with the probe in the half-filling sector in section <ref>. In the subsections of section <ref>, the scaling behavior of the probe, its critical properties, and the resource analysis are studied. Section <ref> contains the analysis of the filling factor, and the paper is summarized in section <ref>. § ULTIMATE PRECISION LIMIT In this section, we briefly review the implications of the Cramér-Rao inequality for quantum sensing problems. In order to estimate an unknown parameter h encoded in a probe, described by the density matrix ρ(h), one has to perform a measurement, which is described by a set of projectors {Π_i}. Each measurement outcome appears with the probability p_i(h)=Tr[Π_iρ(h)]. For this classical probability distribution one can show that the Fisher information can be obtained from ℱ_C(h)=∑_i 1/p_i(h) (∂ p_i(h)/∂ h)^2, which is known as the Classical Fisher Information (CFI). In order to get rid of the measurement dependence, one can maximize the CFI with respect to all possible measurements to obtain the Quantum Fisher Information (QFI), namely ℱ_Q(h)=max_{Π_i}ℱ_C(h) <cit.>. By definition, the QFI is an upper bound for the CFI and is thus called the ultimate precision limit, for which the Cramér-Rao inequality is updated as Δ h ≥ 1/√(M ℱ_C(h)) ≥ 1/√(M ℱ_Q(h)). While the maximization with respect to measurements in the definition of the QFI seems notoriously challenging, it has been shown that alternative approaches can provide computationally friendly methods for calculating the QFI. In particular, it turns out that the QFI is related to a quantity called the fidelity susceptibility χ(h) as ℱ_Q=4χ(h). The fidelity susceptibility is defined as χ(h) = 2 ( 1 - √(Tr[ρ(h)^1/2 ρ(h+δ h) ρ(h)^1/2]) ) / δ h^2, with δ h being an infinitesimal variation in h. It has been shown that for systems that go through a second-order quantum phase transition, the fidelity susceptibility and, hence, the QFI show non-analytic behavior in the vicinity of the critical point <cit.>. This reflects the tremendous sensitivity of the system with respect to the control parameter h which drives the system through the phase transition. In this paper, we rely on Eq. (<ref>) for investigating the sensing power of a Stark many-body probe with long-range interaction. § STARK MANY-BODY PROBE We consider a one-dimensional spin-1/2 chain of L sites that is affected by a gradient field h.
While spin tunneling is restricted to nearest-neighbor sites, the interaction between particles is taken to be long-range, decaying algebraically with exponent η>0. The Hamiltonian reads H(h) = J∑_i=1^L-1(σ_i^xσ_i+1^x+σ_i^yσ_i+1^y) + ∑_i<j σ_i^zσ_j^z/|i-j|^η + h∑_i=1^L i σ_i^z, where J is the exchange coupling, σ_i^(x,y,z) are Pauli operators acting on site i, and h is the amplitude of the applied gradient field, which has to be estimated. By varying the power-law exponent η, one can smoothly interpolate between a fully connected graph (η=0) and a standard nearest-neighbor one-dimensional chain (η→∞). Inherently, many interactions are long-range. Coulomb and dipole-dipole interactions are notable examples of this kind of interaction that can be modeled in certain quantum simulators, e.g., ion traps <cit.> and Rydberg atoms <cit.>. The Hamiltonian Eq. (<ref>) conserves the number of excitations in the z direction, namely [H,S_z]=0, where S_z=1/2∑_iσ_i^z. This implies that the Hamiltonian is block-diagonal with respect to the number of excitations N. Hence, each block can be described by a filling factor of n=N/L. Here, we focus on the sensing power of our probe assuming that the filling factor n is fixed and the probe is prepared in the lowest energy eigenstate of the relevant sector. Note that the true ground state of the Hamiltonian lies in the sector with n=0 (i.e., N=0 excitations). Nonetheless, throughout the paper, for the sake of convenience, we call the lowest eigenstate of the Hamiltonian for any given filling factor n the ground state, which should not be mistaken for the true ground state of the Hamiltonian at filling factor n=0. Regardless of the range of interaction, by increasing the strength of the field h, the probe undergoes a quantum phase transition from an extended phase to a many-body localized one <cit.>. It is known that the many-body localization (MBL) transition occurs across the entire spectrum, in contrast to conventional quantum phase transitions, which occur only in the ground state <cit.>. Detecting and characterizing the MBL transition across the whole spectrum usually relies on exact diagonalization, which severely restricts the numerical simulations to small systems <cit.>. For analyzing the sensing power of a probe, one requires large system size behavior, which is not accessible through exact diagonalization. Therefore, we exploit Matrix Product State (MPS) simulation <cit.> to capture the behavior of the QFI for large system sizes. While this allows us to extract a precise scaling analysis, it comes at the price that we are limited to the ground state in each filling factor and cannot analyze the sensing power of excited states. § SENSING AT THE HALF-FILLING SECTOR (n=1/2) We first focus on the half-filling sector of the Hamiltonian, in which we have N=L/2 excitations. In Fig. <ref>(a), we plot ℱ_Q as a function of the Stark field h/J for a probe of size L=30 with various choices of η. Several interesting features can be observed. First, by increasing h/J the QFI shows a dramatic change in its behavior, from being almost constant in the extended phase to a decreasing function in the localized regime. During this transition, the QFI peaks at some h_max(η), which asymptotically converges to the transition point h_c in the thermodynamic limit <cit.>. Second, various η's leave distinct imprints on the QFI. By moving from a fully connected probe (η=0) to a nearest-neighbor one (η→∞), the peaks of the QFI first decrease and then show a revival behavior.
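As an illustration of how the QFI of this probe can be evaluated, the following is a minimal exact-diagonalization sketch in Python (numpy only): it builds the Hamiltonian above for a small chain, restricts it to a fixed-excitation sector, and estimates ℱ_Q = 4χ(h) from the overlap of the ground states at h and h+δh. The chain length, field value, and finite difference δh are illustrative assumptions; the results reported in this paper are instead obtained with MPS simulations for much larger systems.

import numpy as np

def hamiltonian(L, h, eta, J=1.0):
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    eye = np.eye(2, dtype=complex)

    def site_op(op, i):
        # Embed a single-site operator at position i of the L-site chain.
        full = np.array([[1.0 + 0j]])
        for k in range(L):
            full = np.kron(full, op if k == i else eye)
        return full

    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):  # nearest-neighbor spin tunneling
        H += J * (site_op(sx, i) @ site_op(sx, i + 1) + site_op(sy, i) @ site_op(sy, i + 1))
    for i in range(L):
        for j in range(i + 1, L):  # long-range ZZ interaction, decaying as 1/|i-j|^eta
            H += site_op(sz, i) @ site_op(sz, j) / abs(i - j) ** eta
        H += h * (i + 1) * site_op(sz, i)  # gradient field, site index starting from 1
    return H

def sector_ground_state(L, h, eta, n_up):
    # Restrict to the block with a fixed number of excitations (H commutes with S_z).
    H = hamiltonian(L, h, eta)
    idx = [s for s in range(2**L) if bin(s).count("1") == n_up]
    _, vecs = np.linalg.eigh(H[np.ix_(idx, idx)])
    return vecs[:, 0]

def qfi(L, h, eta, dh=1e-4):
    psi0 = sector_ground_state(L, h, eta, L // 2)   # half-filling sector
    psi1 = sector_ground_state(L, h + dh, eta, L // 2)
    fidelity = abs(np.vdot(psi0, psi1))             # pure-state fidelity
    return 8.0 * (1.0 - fidelity) / dh**2           # F_Q = 4*chi with chi = 2(1-F)/dh^2

print(qfi(L=6, h=1e-3, eta=1.0))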
This decrease-and-revival behavior arises because, as η decreases (i.e., the interaction becomes more long-range), each spin configuration induces a different Zeeman energy splitting at any given site. This effect acts like a random disorder potential, which helps the system to localize and thus reduces the QFI. The observed behavior continues until the system becomes close to a fully connected graph (for η∼ 0.1), at which point all spin configurations induce almost the same energy splitting and thus the localization effect from off-resonant energy separations gradually disappears. Third, strong long-range interaction indeed enhances the sensitivity of the probe by providing the highest value of ℱ_Q in both the extended phase (i.e., h<h_max) and at the transition point (i.e., h=h_max). To explore the behavior of the QFI in the thermodynamic limit, namely for L→∞, one can study the QFI for various system sizes. In Figs. <ref>(b)-(d), we plot the ground state QFI as a function of the Stark field h/J for various system sizes L and selected η=0,1 and 5, respectively. Regardless of the range of the interaction, by enlarging the probe size, the peak of the QFI increases and h_max gradually approaches zero, signaling the divergence of ℱ_Q in the thermodynamic limit for a vanishing transition point h_c→0. While the finite-size effect can be seen in the extended phase, in the localized regime one deals with a size-independent algebraic decay of the QFI which can be perfectly described by ℱ_Q∝|h-h_max|^-α(η) (dashed lines). From Figs. <ref>(b)-(d), one can see that the exponent α takes the values α(η=0)=4.00, α(η=1)=4.94 and α(η=5)=3.97, respectively. §.§ Super-Heisenberg sensitivity To characterize the scaling of the QFI with the probe size, in Figs. <ref>(a) and (b), we plot ℱ_Q versus L for some values of η, both at the transition point, i.e., h=h_max, and in the extended phase, i.e., h/J=10^-4, respectively. In both panels, the markers represent the QFI obtained by numerical simulation and the lines are the best fitting functions of the form ℱ_Q(h,η)∝L^β(h,η). The best obtained exponent β(h,η) has been plotted as a function of η in Figs. <ref>(c) and (d), for h=h_max and h/J=10^-4, respectively. Some interesting observations can be highlighted. First, regardless of the interaction range η, one can obtain super-Heisenberg sensitivity for our probe (i.e., β>2) both at the transition point and in the extended regime. Second, as discussed before, by decreasing η (i.e., making the interaction more long-range) the effective Zeeman energy splitting enhances the localization and thus reduces the QFI as well as the exponent β. As η further decreases, the probe becomes effectively fully connected, implying that all spin configurations induce equal energy splittings that do not contribute to the localization anymore. Therefore, β changes its behavior and starts rising as η decreases towards zero. §.§ Finite-size scaling analysis The observed trend of the QFI in Figs. <ref>(b)-(d) (shown with dashed lines) strongly implies the algebraic divergence of the QFI in the thermodynamic limit as ℱ_Q∝|h-h_max|^-α. For the sake of brevity, we drop the dependence of the parameters on η and h. This behavior, which is attributed to all second-order phase transitions in the thermodynamic limit, is accompanied by the emergence of a diverging length scale as ξ∼|h-h_c|^-ν, with ν known as the critical exponent. To extract the parameters α and ν in finite-size systems one needs to establish a finite-size scaling analysis.
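The exponents β quoted above follow from power-law fits of the QFI versus the probe size; a minimal way to reproduce such a fit is a least-squares regression in log-log space, sketched below with placeholder numbers (the listed sizes and QFI values are illustrative, not the actual simulation data).

import numpy as np

# Placeholder data: probe sizes and the corresponding QFI at h = h_max.
L_vals = np.array([20, 22, 24, 26, 28, 30])
FQ_vals = np.array([1.1e4, 1.8e4, 2.8e4, 4.2e4, 6.1e4, 8.6e4])

# Fit F_Q = c * L^beta, i.e. log F_Q = beta * log L + log c.
beta, log_c = np.polyfit(np.log(L_vals), np.log(FQ_vals), 1)
print(f"fitted exponent beta ~ {beta:.2f}")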
In this finite-size scaling approach, the QFI is rescaled as ℱ_Q = L^α/ν g(L^1/ν(h-h_c)), where g(·) is an arbitrary function. Plotting the rescaled QFI, namely L^-α/ν ℱ_Q, versus L^1/ν(h-h_c) collapses all the curves of different probe sizes, and the best data collapse is obtained for an accurate selection of the critical properties, i.e., (h_c, α, ν). Figs. <ref>(a) and (b) illustrate the best-achieved data collapse for probes of size L=20,⋯,30 for selected η=0 and η=1, respectively. The critical properties for both panels, obtained using the PYTHON package PYFSSA <cit.>, are (h_c, α, ν) =(1.04× 10^-5, 4.00, 1.01), and (h_c, α, ν) =(0.70× 10^-5, 4.94, 1.39). For the sake of completeness, in Table <ref> we report the exponents α and ν for different values of η. Since in finite-size systems the peaks of the QFI at h_max are cut off by the system size, one has ℱ_Q∝L^β. The two expected behaviors of the QFI, namely ℱ_Q∝|h-h_c|^-α in the thermodynamic limit and ℱ_Q(h_max)∝L^β for finite systems at the transition point, suggest a unified ansatz for the QFI as ℱ_Q ∝ 1/(L^-β + A|h-h_max|^-α), where A is a constant. One can indeed retrieve the two behaviors from the above ansatz by either choosing L→∞ or h=h_max. Note that the two ansatzes of Eqs. (<ref>) and (<ref>) describe the same quantity and thus have to match each other. A simple factorization of L^-β from the denominator of Eq. (<ref>) shows that the two ansatzes are the same provided that the exponents satisfy β = α/ν. The validity of the above equation for all the considered η's is evidenced by the data presented in Table <ref>, in which α/ν, obtained from the finite-size scaling analysis of Eq. (<ref>), matches closely with β, obtained from the scaling analysis in Fig. <ref>(a). §.§ Resource analysis Up to now, we have shown that quantum criticality can indeed offer significant advantages for quantum sensing. Nevertheless, this advantage is usually hindered by the time required to prepare the ground state close to the critical points. Initializing a probe in its ground state via, for instance, adiabatic evolution <cit.> demands a time that scales with the probe size as t∝L^z <cit.>, in which the exponent z is known as the dynamical exponent and determines the rate of the energy gap closing, namely Δ E∝L^-z, for a system approaching its criticality. Taking the initialization time into consideration offers the normalized QFI, i.e., ℱ_Q/t, as a new figure of merit <cit.>. Since ℱ_Q(h_max)∝ L^β, one can easily show that the normalized QFI scales as ℱ_Q/t∝ L^β-z. In order to estimate the dynamical exponent z, one has to numerically compute the energy gap Δ E versus the system size L. In Fig. <ref>(a), we plot the energy gap Δ E, obtained through exact diagonalization, as a function of L for a fully connected probe (η=0) in the extended phase (i.e., 0.0001⩽h⩽0.1), at the transition point (i.e., h=h_max) and in the localized phase (i.e., h/J=1). An algebraic decay of the energy gap as a function of L is observed in the extended phase, with z=0.91, at the transition point, with z=1.04, and in the localized phase, with z=0. In Fig. <ref>(b), we plot the dynamical exponent z as a function of η for a probe in the extended phase (h/J=10^-4) and at the transition point (h=h_max). As the results show, the exponent z qualitatively behaves similarly to the exponent β as the interaction range η varies. It is worth emphasizing that even when time is included in the resource analysis, the exponent β-z remains larger than 2 for all interaction ranges.
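As a purely illustrative numerical example of this trade-off (the values are assumed, not taken from the figures): if a probe had β=5 at the transition point and a gap closing with z=1, of the order of the z values quoted above, the time-normalized figure of merit would scale as ℱ_Q/t ∝ L^β-z = L^4, so the probe would still beat the Heisenberg limit β=2 by a wide margin even after paying the adiabatic preparation time.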
Such super-Heisenberg scaling can indeed provide a significant advantage for weak-field sensing. § FILLING FACTOR ANALYSIS Having described the many-body Stark probe in the half-filling sector of the Hilbert space, we now focus on the effect of the filling factor n on the performance of our sensor. In Figs. <ref>(a) and (b) we plot the QFI at the transition point h=h_max as a function of η for filling factors n=1/4 and n=1/8, respectively. Clearly, analogous to the scenario of n=1/2 (see Fig. <ref>(a)), as η decreases (the interaction becomes more long-range) the QFI goes down and then revives as the effective localization impact disappears. Interestingly, for larger filling factors (e.g., n=1/2 and, to some extent, n=1/4), a fully connected probe with η=0 outperforms the other choices of η. As the filling factor reduces, the best performance belongs to the nearest-neighbor probe with η→∞. In addition, our results show that decreasing n can remarkably boost the achievable QFI. This can be observed in Fig. <ref>(c), which represents ℱ_Q(h_max) in a probe of size L=32 prepared in various sectors of n=1/2, 1/4 and 1/8. These results are in line with our previous results, in which the highest advantage was obtained for a Stark probe with a single excitation <cit.>. To characterize the impact of the filling factor on the scaling of the QFI with respect to L, similar to the scenario of n=1/2, we fit the obtained QFI for different probe sizes L with the function ℱ_Q∝L^β(h,η). The resulting β's are reported as a function of η in Figs. <ref>(a) and (b) for n=1/4 and n=1/8, respectively. In each panel, we report the obtained β at the transition point (h=h_max) as well as in the extended phase (h/J=10^-4). As Figs. <ref>(a) and (b) show, the exponent β shows qualitatively similar behavior to the half-filling case as the interaction becomes more long-ranged. Importantly, for all interaction ranges the exponent β shows super-Heisenberg scaling, and the best performance is always obtained for a nearest-neighbor probe. By decreasing the filling factor n, the performance of the probe in the extended phase gets closer to the one at the transition point. This is in full agreement with our previous results obtained for the Stark probe with a single particle <cit.>, in which for the nearest-neighbor probe both cases yield the same β. § CONCLUSION The Stark localization transition in many-body systems, induced by applying a gradient field to the lattice, has been harnessed to generate an ultra-precise sensor for measuring weak gradient fields. In this paper, we addressed the effect of long-range interactions on the capability of these probes. Our study showed that the strong super-Heisenberg precision of the Stark probe can be obtained for all ranges of interaction in the extended phase until the transition point. However, as the interaction becomes more long-range, two different behaviors can be observed. Initially, by making the interaction more long-ranged, the sensing power, quantified by the QFI and its exponent β, decreases. Then, around η∼ 0.1, where the system becomes effectively a fully connected graph, the sensitivity is enhanced again, as can be seen in the rise of both the QFI and β. These different trends can be explained through long-range interaction-induced localization. In long-range interacting systems, keeping the filling factor fixed, every given spin configuration induces a different Zeeman energy splitting at each site.
This energy splitting behaves like an effective random disorder that enhances localization and decreases the sensing power. When the interaction becomes almost fully connected, the energy splitting of all spin configurations becomes equal and effective localization disappears, which boosts the sensitivity of the probe. Interestingly, even by incorporating state preparation time in our resource analysis, the super-Heisenberg scaling still remains valid. In the localized phase, the system becomes size-independent and QFI follows a universal function. Several critical exponents governing the localization transition as well as their relationship have been extracted through extensive finite-size scaling analysis. Finally, we have shown that the sensitivity decreases by increasing the filling factor. § ACKNOWLEDGMENT A.B. acknowledges support from the National Key R&D Program of China (Grant No. 2018YFA0306703), the National Science Foundation of China (Grants No. 12050410253, No. 92065115, and No. 12274059), and the Ministry of Science and Technology of China (Grant No. QNJ2021167001L). R.Y. thanks the National Science Foundation of China for the International Young Scientists Fund (Grant No. 12250410242). 118 natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url<#>1urlprefixURL [Cacciapuoti and Salomon(2009)]cacciapuoti2009space authorL. Cacciapuoti and authorC. Salomon, journalEur. Phys. J.: Spec. Top. volume172, pages57 (year2009). [Ludlow et al.(2015)Ludlow, Boyd, Ye, Peik, and Schmidt]ludlow2015optical authorA. D. Ludlow, authorM. M. Boyd, authorJ. Ye, authorE. Peik, and authorP. O. Schmidt, journalRev. Mod. Phys. volume87, pages637 (year2015). [Dolde et al.(2011)Dolde, Fedder, Doherty, Nöbauer, Rempp, Balasubramanian, Wolf, Reinhard, Hollenberg, Jelezko et al.]dolde2011electric authorF. Dolde, authorH. Fedder, authorM. W. Doherty, authorT. Nöbauer, authorF. Rempp, authorG. Balasubramanian, authorT. Wolf, authorF. Reinhard, authorL. C. Hollenberg, authorF. Jelezko, et al., journalNat. Phys. volume7, pages459 (year2011). [Facon et al.(2016)Facon, Dietsche, Grosso, Haroche, Raimond, Brune, and Gleyzes]facon2016sensitive authorA. Facon, authorE.-K. Dietsche, authorD. Grosso, authorS. Haroche, authorJ.-M. Raimond, authorM. Brune, and authorS. Gleyzes, journalNature volume535, pages262 (year2016). [Budker and Romalis(2007)]budker2007optical authorD. Budker and authorM. Romalis, journalNat. Phys. volume3, pages227 (year2007). [Taylor et al.(2008)Taylor, Cappellaro, Childress, Jiang, Budker, Hemmer, Yacoby, Walsworth, and Lukin]taylor2008high authorJ. M. Taylor, authorP. Cappellaro, authorL. Childress, authorL. Jiang, authorD. Budker, authorP. Hemmer, authorA. Yacoby, authorR. Walsworth, and authorM. Lukin, journalNat. Phys. volume4, pages810 (year2008). [Tanaka et al.(2015)Tanaka, Knott, Matsuzaki, Dooley, Yamaguchi, Munro, and Saito]tanaka2015proposed authorT. Tanaka, authorP. Knott, authorY. Matsuzaki, authorS. Dooley, authorH. Yamaguchi, authorW. J. Munro, and authorS. Saito, journalPhys. Rev. Lett. volume115, pages170801 (year2015). [Tino et al.(2019)Tino, Bassi, Bianco, Bongs, Bouyer, Cacciapuoti, Capozziello, Chen, Chiofalo, Derevianko et al.]tino2019sage authorG. M. Tino, authorA. Bassi, authorG. Bianco, authorK. Bongs, authorP. Bouyer, authorL. Cacciapuoti, authorS. Capozziello, authorX. Chen, authorM. L. Chiofalo, authorA. Derevianko, et al., journalEur. Phys. J. D volume73, pages1 (year2019). 
[Aasi et al.(2013)Aasi, Abadie, Abbott, Abbott, Abbott, Abernathy, Adams, Adams, Addesso, Adhikari et al.]aasi2013enhanced authorJ. Aasi, authorJ. Abadie, authorB. Abbott, authorR. Abbott, authorT. Abbott, authorM. Abernathy, authorC. Adams, authorT. Adams, authorP. Addesso, authorR. Adhikari, et al., journalNat. Photon. volume7, pages613 (year2013). [Dailey et al.(2021)Dailey, Bradley, Jackson Kimball, Sulai, Pustelny, Wickenbrock, and Derevianko]Cosmology1 authorC. Dailey, authorC. Bradley, authorD. F. Jackson Kimball, authorI. A. Sulai, authorS. Pustelny, authorA. Wickenbrock, and authorA. Derevianko, journalNat. Astron. volume5, pages150 (year2021). [Tsai et al.(2023)Tsai, Eby, and Safronova]Cosmology2 authorY.-D. Tsai, authorJ. Eby, and authorM. S. Safronova, journalNat. Astron. volume7, pages113 (year2023). [Xiong et al.(2021)Xiong, Wu, Leng, Li, Duan, Kong, Huang, Li, Gao, Rong et al.]xiong2021searching authorF. Xiong, authorT. Wu, authorY. Leng, authorR. Li, authorC.-K. Duan, authorX. Kong, authorP. Huang, authorZ. Li, authorY. Gao, authorX. Rong, et al., journalPhys. Rev. Research volume3, pages013205 (year2021). [Aslam et al.(2023)Aslam, Zhou, Urbach, Turner, Walsworth, Lukin, and Park]Biology1 authorN. Aslam, authorH. Zhou, authorE. K. Urbach, authorM. J. Turner, authorR. L. Walsworth, authorM. D. Lukin, and authorH. Park, journalNat. Rev. Phys. volume5, pages157 (year2023). [Schirhagl et al.(2014)Schirhagl, Chang, Loretz, and Degen]Biology2 authorR. Schirhagl, authorK. Chang, authorM. Loretz, and authorC. L. Degen, journalAnnu. Rev. Phys. Chem. volume65, pages83 (year2014). [Shi et al.(2018)Shi, Kong, Zhao, Zhang, Chen, Chen, Zhang, Wang, Ye, Wang et al.]shi2018single authorF. Shi, authorF. Kong, authorP. Zhao, authorX. Zhang, authorM. Chen, authorS. Chen, authorQ. Zhang, authorM. Wang, authorX. Ye, authorZ. Wang, et al., journalNat. Methods volume15, pages697 (year2018). [Paris(2009)]paris2009quantum authorM. G. Paris, journalInt. J. Quantum Inf. volume7, pages125 (year2009). [Degen et al.(2017)Degen, Reinhard, and Cappellaro]degen2017quantum authorC. L. Degen, authorF. Reinhard, and authorP. Cappellaro, journalRev. Mod. Phys. volume89, pages035002 (year2017). [Greenberger et al.(1989)Greenberger, Horne, and Zeilinger]greenberger1989going authorD. M. Greenberger, authorM. A. Horne, and authorA. Zeilinger, in booktitleBell’s theorem, quantum theory and conceptions of the universe (publisherSpringer, year1989), pp. pages69–72. [Giovannetti et al.(2004)Giovannetti, Lloyd, and Maccone]giovannetti2004quantum authorV. Giovannetti, authorS. Lloyd, and authorL. Maccone, journalScience volume306, pages1330 (year2004). [Leibfried et al.(2004)Leibfried, Barrett, Schaetz, Britton, Chiaverini, Itano, Jost, Langer, and Wineland]leibfried2004toward authorD. Leibfried, authorM. D. Barrett, authorT. Schaetz, authorJ. Britton, authorJ. Chiaverini, authorW. M. Itano, authorJ. D. Jost, authorC. Langer, and authorD. J. Wineland, journalScience volume304, pages1476 (year2004). [Boixo et al.(2007)Boixo, Flammia, Caves, and Geremia]boixo2007generalized authorS. Boixo, authorS. T. Flammia, authorC. M. Caves, and authorJ. M. Geremia, journalPhys. Rev. Lett. volume98, pages090401 (year2007). [Giovannetti et al.(2006)Giovannetti, Lloyd, and Maccone]giovannetti2006quantum authorV. Giovannetti, authorS. Lloyd, and authorL. Maccone, journalPhys. Rev. Lett. volume96, pages010401 (year2006). [Banaszek et al.(2009)Banaszek, Demkowicz-Dobrzański, and Walmsley]banaszek2009quantum authorK. Banaszek, authorR. 
Demkowicz-Dobrzański, and authorI. A. Walmsley, journalNat. Photonics volume3, pages673 (year2009). [Giovannetti et al.(2011)Giovannetti, Lloyd, and Maccone]giovannetti2011advances authorV. Giovannetti, authorS. Lloyd, and authorL. Maccone, journalNat. photonics volume5, pages222 (year2011). [Fröwis and Dür(2011)]frowis2011stable authorF. Fröwis and authorW. Dür, journalPhys. Rev. Lett. volume106, pages110402 (year2011). [Wang et al.(2018)Wang, Wang, Zhan, Bian, Li, Sanders, and Xue]wang2018entanglement authorK. Wang, authorX. Wang, authorX. Zhan, authorZ. Bian, authorJ. Li, authorB. C. Sanders, and authorP. Xue, journalPhys. Rev. A volume97, pages042112 (year2018). [Kwon et al.(2019)Kwon, Tan, Volkoff, and Jeong]kwon2019nonclassicality authorH. Kwon, authorK. C. Tan, authorT. Volkoff, and authorH. Jeong, journalPhys. Rev. Lett. volume122, pages040503 (year2019). [Demkowicz-Dobrzański et al.(2012)Demkowicz-Dobrzański, Kołodyński, and Guţă]demkowicz2012elusive authorR. Demkowicz-Dobrzański, authorJ. Kołodyński, and authorM. Guţă, journalNat. Commun. volume3, pages1063 (year2012). [Albarelli et al.(2018)Albarelli, Rossi, Tamascelli, and Genoni]albarelli2018restoring authorF. Albarelli, authorM. A. Rossi, authorD. Tamascelli, and authorM. G. Genoni, journalQuantum volume2, pages110 (year2018). [Nagata et al.(2007)Nagata, Okamoto, O'Brien, Sasaki, and Takeuchi]GHZexp1 authorT. Nagata, authorR. Okamoto, authorJ. L. O'Brien, authorK. Sasaki, and authorS. Takeuchi, journalScience volume316, pages726 (year2007). [Benjamin K. Malia(2022)]GHZexp2 authorJ. M.-R. . M. A. K. Benjamin K. Malia, Yunfan Wu, journalNature volume612, pages661–665 (year2022). [Marciniak et al.(2022)Marciniak, Feldker, Pogorelov, Kaubruegger, Vasilyev, van Bijnen, Schindler, Zoller, Blatt, and Monz]GHZexp3 authorC. D. Marciniak, authorT. Feldker, authorI. Pogorelov, authorR. Kaubruegger, authorD. V. Vasilyev, authorR. van Bijnen, authorP. Schindler, authorP. Zoller, authorR. Blatt, and authorT. Monz, journalNature volume603, pages604 (year2022). [De Pasquale et al.(2013)De Pasquale, Rossini, Facchi, and Giovannetti]de2013quantum authorA. De Pasquale, authorD. Rossini, authorP. Facchi, and authorV. Giovannetti, journalPhys. Rev. A volume88, pages052117 (year2013). [Pang and Brun(2014)]PhysRevA.90.022117 authorS. Pang and authorT. A. Brun, journalPhys. Rev. A volume90, pages022117 (year2014). [Skotiniotis et al.(2015)Skotiniotis, Sekatski, and Dür]skotiniotis2015quantum authorM. Skotiniotis, authorP. Sekatski, and authorW. Dür, journalNew J. Phys. volume17, pages073032 (year2015). [Raghunandan et al.(2018)Raghunandan, Wrachtrup, and Weimer]raghunandan2018high authorM. Raghunandan, authorJ. Wrachtrup, and authorH. Weimer, journalPhys. Rev. Lett. volume120, pages150501 (year2018). [Heugel et al.(2019)Heugel, Biondi, Zilberberg, and Chitra]heugel2019quantum authorT. L. Heugel, authorM. Biondi, authorO. Zilberberg, and authorR. Chitra, journalPhys. Rev. Lett. volume123, pages173601 (year2019). [Yang and Jacob(2019)]yang2019engineering authorL.-P. Yang and authorZ. Jacob, journalJ. Appl. Phys. volume126 (year2019). [Ding et al.(2022)Ding, Liu, Shi, Guo, Mølmer, and Adams]ding2022enhanced authorD.-S. Ding, authorZ.-K. Liu, authorB.-S. Shi, authorG.-C. Guo, authorK. Mølmer, and authorC. S. Adams, journalNat. Phys. volume18, pages1447 (year2022). [Zanardi and Paunković(2006)]zanardi2006ground authorP. Zanardi and authorN. Paunković, journalPhys. Rev. E volume74, pages031123 (year2006). 
[Zanardi et al.(2007)Zanardi, Quan, Wang, and Sun]zanardi2007mixed authorP. Zanardi, authorH. Quan, authorX. Wang, and authorC. Sun, journalPhys. Rev. A volume75, pages032109 (year2007). [Gu et al.(2008)Gu, Kwok, Ning, Lin et al.]gu2008fidelity authorS.-J. Gu, authorH.-M. Kwok, authorW.-Q. Ning, authorH.-Q. Lin, et al., journalPhys. Rev. B volume77, pages245109 (year2008). [Zanardi et al.(2008)Zanardi, Paris, and Venuti]zanardi2008quantum authorP. Zanardi, authorM. G. Paris, and authorL. C. Venuti, journalPhys. Rev. A volume78, pages042105 (year2008). [Invernizzi et al.(2008)Invernizzi, Korbman, Venuti, and Paris]invernizzi2008optimal authorC. Invernizzi, authorM. Korbman, authorL. C. Venuti, and authorM. G. Paris, journalPhys. Rev. A volume78, pages042106 (year2008). [Gu(2010)]gu2010fidelity authorS.-J. Gu, journalInt. J. Mod. Phys. B volume24, pages4371 (year2010). [Gammelmark and Mølmer(2011)]gammelmark2011phase authorS. Gammelmark and authorK. Mølmer, journalNew J. Phys. volume13, pages053035 (year2011). [Rams et al.(2018)Rams, Sierant, Dutta, Horodecki, and Zakrzewski]rams2018limits authorM. M. Rams, authorP. Sierant, authorO. Dutta, authorP. Horodecki, and authorJ. Zakrzewski, journalPhys. Rev. X volume8, pages021022 (year2018). [Wei(2019)]wei2019fidelity authorB.-B. Wei, journalPhys. Rev. A volume99, pages042117 (year2019). [Chu et al.(2021)Chu, Zhang, Yu, and Cai]chu2021dynamic authorY. Chu, authorS. Zhang, authorB. Yu, and authorJ. Cai, journalPhys. Rev. Lett. volume126, pages010502 (year2021). [Liu et al.(2021)Liu, Chen, Jiang, Yang, Wu, Li, Yuan, Peng, and Du]liu2021experimental authorR. Liu, authorY. Chen, authorM. Jiang, authorX. Yang, authorZ. Wu, authorY. Li, authorH. Yuan, authorX. Peng, and authorJ. Du, journalnpj Quantum Inf. volume7, pages170 (year2021). [Montenegro et al.(2021)Montenegro, Mishra, and Bayat]montenegro2021global authorV. Montenegro, authorU. Mishra, and authorA. Bayat, journalPhys. Rev. Lett. volume126, pages200501 (year2021). [Mirkhalaf et al.(2021)Mirkhalaf, Orenes, Mitchell, and Witkowska]mirkhalaf2021criticality authorS. S. Mirkhalaf, authorD. B. Orenes, authorM. W. Mitchell, and authorE. Witkowska, journalPhys. Rev. A volume103, pages023317 (year2021). [Di Candia et al.(2023)Di Candia, Minganti, Petrovnin, Paraoanu, and Felicetti]di2023critical authorR. Di Candia, authorF. Minganti, authorK. Petrovnin, authorG. Paraoanu, and authorS. Felicetti, journalnpj Quantum Inf. volume9, pages23 (year2023). [Mishra and Bayat(2021)]mishra2021driving authorU. Mishra and authorA. Bayat, journalPhys. Rev. Lett. volume127, pages080504 (year2021). [Mishra and Bayat(2022)]mishra2022integrable authorU. Mishra and authorA. Bayat, journalSci. Rep. volume12, pages14760 (year2022). [Baumann et al.(2010)Baumann, Guerlin, Brennecke, and Esslinger]baumann2010dicke authorK. Baumann, authorC. Guerlin, authorF. Brennecke, and authorT. Esslinger, journalNature volume464, pages1301 (year2010). [Baden et al.(2014)Baden, Arnold, Grimsmo, Parkins, and Barrett]baden2014realization authorM. P. Baden, authorK. J. Arnold, authorA. L. Grimsmo, authorS. Parkins, and authorM. D. Barrett, journalPhys. Rev. Lett. volume113, pages020408 (year2014). [Klinder et al.(2015)Klinder, Keßler, Wolke, Mathey, and Hemmerich]klinder2015dynamical authorJ. Klinder, authorH. Keßler, authorM. Wolke, authorL. Mathey, and authorA. Hemmerich, journalProc. Natl. Acad. Sci. U.S.A. volume112, pages3290 (year2015). 
http://arxiv.org/abs/2307.04307v1
20230710020825
Weyl semimetallic state in the Rashba-Hubbard model
[ "Katsunori Kubo" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall" ]
Advanced Science Research Center, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195, Japan

We investigate the Hubbard model with the Rashba spin-orbit coupling on a square lattice. The Rashba spin-orbit coupling generates two-dimensional Weyl points in the band dispersion. In a system with edges along the [11] direction, zero-energy edge states appear, while no edge state exists for a system with edges along an axis direction. The zero-energy edge states with a certain momentum along the edges are predominantly in the up-spin state on the right edge, while they are predominantly in the down-spin state on the left edge. Thus, the zero-energy edge states are helical. By using a variational Monte Carlo method for finite Coulomb interaction cases, we find that the Weyl points can move toward the Fermi level by the correlation effects. We also investigate the magnetism of the model by the Hartree-Fock approximation and discuss weak magnetic order in the weak-coupling region.

Weyl semimetallic state in the Rashba-Hubbard model
Katsunori Kubo
August 12, 2023

§ INTRODUCTION

In a two-dimensional system without inversion symmetry, such as at the interface of a heterostructure, a momentum-dependent spin-orbit coupling is allowed. It is called the Rashba spin-orbit coupling <cit.>. The Rashba spin-orbit coupling lifts the spin degeneracy and affects the electronic state of materials. Several interesting phenomena originating from the Rashba spin-orbit coupling have been proposed and investigated. By considering the spin precession caused by the Rashba spin-orbit coupling, Datta and Das proposed the spin transistor <cit.>, in which electron transport between spin-polarized contacts can be modulated by the gate voltage. After this proposal, the tunability of the Rashba spin-orbit coupling by the gate voltage was demonstrated experimentally <cit.>. Such an effect may be exploited in spintronic devices. The possibility of an intrinsic spin Hall effect induced by the Rashba spin-orbit coupling, which is also important in the field of spintronics, has been discussed for a long time <cit.>. Another interesting phenomenon associated with the Rashba spin-orbit coupling is superconductivity. When the Rashba spin-orbit coupling is introduced in a superconducting system, even- and odd-parity superconducting states are mixed due to the breaking of the inversion symmetry <cit.>. This mixing affects the magnetic properties of the superconducting state, such as the Knight shift.

While the above studies have mainly focused on the one-electron states in the presence of the Rashba spin-orbit coupling, the effects of the Coulomb interaction between electrons have also been investigated. The Hubbard model with the Rashba spin-orbit coupling on a square lattice, called the Rashba-Hubbard model, is one of the simplest models to investigate such effects. In this study, we investigate the ground state of this model at half-filling, i.e., electron number per site n=1, by the variational Monte Carlo method and the Hartree-Fock approximation. In the strong coupling limit, an effective localized model can be derived, and the possibility of long-period magnetic order has been discussed <cit.>. The long-period magnetism is a consequence of the Dzyaloshinskii-Moriya interaction caused by the Rashba spin-orbit coupling. Such long-period magnetic order is also discussed within the Hartree-Fock approximation for the Rashba-Hubbard model <cit.>.
However, there is a contradiction among these studies even within the Hartree-Fock approximation. In the weak-coupling region with a finite Rashba spin-orbit coupling, an antiferromagnetic order is obtained in Ref. <cit.>, but a paramagnetic phase is obtained in Refs. <cit.> and <cit.>. We will discuss this point in Sec. <ref>. The knowledge of the electron correlation beyond the Hartree-Fock approximation is limited. The electron correlation in the Rashba-Hubbard model is studied by a dynamical mean-field theory mainly focusing on magnetism <cit.> and by a cluster perturbation theory investigating the Mott transition in the paramagnetic state <cit.>. We will study the electron correlation in the paramagnetic phase by using the variational Monte Carlo method in Sec. <ref>. The results concerning the Mott transition are consistent with Ref. <cit.>. In addition, we find a transition to a Weyl semimetallic state by the electron correlation. Even without the Coulomb interaction, the band structure of this model is intriguing. When the Rashba spin-orbit coupling is finite, the upper and lower bands touch each other at Weyl points. In the large Rashba spin-orbit coupling limit, all the Weyl points locate at the Fermi level for half-filling. Topological aspects of the Weyl points and corresponding edge states of this simple model are discussed in Sec. <ref>. § MODEL The model Hamiltonian is given by H=H_kin+H_R+H_int. The kinetic energy term is given by H_kin = -t∑_(r,r') σ (c_rσ^†c_r' σ +c_r' σ^†c_rσ) =∑_kσϵ_k c_kσ^†c_kσ, where c_rσ is the annihilation operator of the electron at site r with spin σ and c_kσ is the Fourier transform of it. (r,r') denotes a pair of nearest-neighbor sites, t is the hopping integral, and the kinetic energy is ϵ_k=-2t (cos k_x + cos k_y), where the lattice constant is set as unity. The Rashba spin-orbit coupling term is given by <cit.> H_R = iλ_R ∑_rσσ' a=± 1 a (σ^x_σσ' c_rσ^†c_r+aŷσ' -σ^y_σσ' c_rσ^†c_r+ax̂σ') = -2λ_R∑_kσσ'(sin k_y σ^x_σσ'-sin k_x σ^y_σσ') c_kσ^†c_kσ' = ∑_kσσ'[h_x(k) σ^x_σσ'+h_y(k) σ^y_σσ'] c_kσ^†c_kσ' = ∑_kσσ' H_R σσ'(k) c_kσ^†c_kσ', where x̂ (ŷ) is the unit vector along the x (y) direction, σ are the Pauli matrices, λ_R is the coupling constant of the Rashba spin-orbit coupling, h_x(k)=-2λ_R sin k_y, and h_y(k)= 2λ_R sin k_x. We can assume t ≥ 0 and λ_R ≥ 0 without loss of generality. We parametrize them as t=t̃cosα and λ_R=√(2) t̃sinα. The band dispersion of H_0=H_kin+H_R is E_±(k) =-2t(cos k_x+cos k_y) ± |h(k)|, where |h(k)|=√(h_x^2(k)+h_y^2(k)) =2λ_R√(sin^2 k_x+sin^2 k_y). The bandwidth is W=8t̃. Due to the electron-hole symmetry of the model, the Fermi level is zero at half-filling. For α=0, that is, without the Rashba spin-orbit coupling, the band is doubly degenerate [Fig. <ref>(a)]. For a finite λ_R, the spin degeneracy is lifted except at the time-reversal invariant momenta X^(0)=(0,0), X^(1)=(π,0), X^(2)=(0,π), and X^(3)=(π,π) [Figs. <ref>(b) and <ref>(c)]. These are two-dimensional Weyl points. The energies at the Weyl points X^(1) and X^(2) are always zero. By increasing α to 0.5π (t=0), the energies at the other Weyl points X^(0) and X^(3) also move to zero. In Fig. <ref>(d), we show the energy dispersion in the entire Brillouin zone for α=0.5π. We can see the linear dispersions around the Weyl points. The Coulomb interaction term is given by H_int=U∑_rn_r↑n_r↓, where n_rσ=c_rσ^†c_rσ and U is the coupling constant of the Coulomb interaction. 
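As a quick numerical illustration of the band structure described above (added here as an editorial sketch, not part of the original analysis; it assumes only NumPy and the parametrization t = t̃ cos α, λ_R = √2 t̃ sin α given in the text), the following Python snippet evaluates E_±(k) and confirms that the band splitting vanishes at the four time-reversal invariant momenta, with E = 0 at X^(1) and X^(2) for any α and at all four points for α = 0.5π.

import numpy as np

def bands(kx, ky, alpha, t_tilde=1.0):
    # E_pm(k) = -2t(cos kx + cos ky) -/+ |h(k)|, with |h(k)| = 2 lambda_R sqrt(sin^2 kx + sin^2 ky)
    t = t_tilde * np.cos(alpha)
    lam = np.sqrt(2.0) * t_tilde * np.sin(alpha)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    h = 2.0 * lam * np.hypot(np.sin(kx), np.sin(ky))
    return eps - h, eps + h

trim = {"X0": (0.0, 0.0), "X1": (np.pi, 0.0), "X2": (0.0, np.pi), "X3": (np.pi, np.pi)}
for label, (kx, ky) in trim.items():
    e_minus, e_plus = bands(kx, ky, alpha=0.3 * np.pi)
    print(label, "splitting =", round(e_plus - e_minus, 12), "energy =", round(e_plus, 12))
# X1 and X2 lie exactly at E = 0; X0 and X3 reach E = 0 only for alpha = 0.5*pi (t = 0).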
§ TOPOLOGY AND EDGE STATES OF THE NON-INTERACTING HAMILTONIAN

The energy bands become degenerate when h(k)=0, i.e., at the Weyl points. In the vicinity of these points, we set k=X^(l)+p and obtain H_R(k) = ∑_j h_j(k)σ^j ≃ ∑_ij [∂ h_j(k)/∂ k_i]|_k=X^(l) p_i σ^j = ∑_ij v^(l)_ij p_i σ^j. The chirality of each Weyl point X^(l) is defined as χ_l = sgn[det v^(l)] <cit.>, and we obtain χ_0=χ_3=1 and χ_1=χ_2=-1. The winding number of the normalized two-component vector field ĥ(k)=h(k)/|h(k)| is <cit.> w_l = ∮_C_l (dk/2π)·[ ĥ_x(k)∇ĥ_y(k) - ĥ_y(k)∇ĥ_x(k)], where C_l is a loop enclosing X^(l). We obtain w_l=χ_l. Figure <ref> shows ĥ(k) around k=X^(0) and X^(1) as examples. We can recognize the winding numbers 1 and -1, respectively, from this figure. These topological numbers are related to the Berry phase <cit.>. The eigenvector of H_R(k) with eigenvalue -|h(k)| is |k⟩ = (1/√(2))(-1, ĥ_x(k)+iĥ_y(k))^T. The Berry connection is a(k) = -i⟨k|∇|k⟩ = (1/2)[ ĥ_x(k)∇ĥ_y(k) - ĥ_y(k)∇ĥ_x(k)]. Then, the Berry phase is γ_l = ∫_C_l dk·a(k) = w_l π.

From the existence of such topological defects as the Weyl points, we expect edge states, as in graphene with Dirac points <cit.>. We consider two types of edges: the edges along an axis direction [straight edges, Fig. <ref>(a)] and the edges along the [11] direction [zigzag edges, Fig. <ref>(b)]. We denote the momentum along the edges as k and the momentum perpendicular to the edges as k_⊥. To discuss the existence of the edge states, the chiral symmetry and the winding number for a fixed k are important <cit.>. The Rashba term has a chiral symmetry: { H_R(k), σ^z } = H_R(k)σ^z + σ^z H_R(k) = 0 and σ^z σ^z† = I, with I being the unit matrix. The winding number for a fixed k is given by w(k) = ∫_0^2π (dk_⊥/2π)[ ĥ_x(k) ∂/∂ k_⊥ ĥ_y(k) - ĥ_y(k) ∂/∂ k_⊥ ĥ_x(k) ]. For the straight edges, we find w(k)=0, and we expect that the edge states are absent. For the zigzag edges, h_x(k)=-2λ_R sin(k-k_⊥) and h_y(k)= 2λ_R sin(k+k_⊥), where we have set 1/√(2) times the bond length as unity, and we find w(k)=-sgn[sin(2k)] except for k = 0, ±π/2, and ±π (projected Weyl points). At the projected Weyl points, w(k)=0. Thus, the edge states should exist except at the projected Weyl points, at least for t=0. We note that the edge states can be understood as those of a one-dimensional topological insulator. The model with only the Rashba term at fixed k is a one-dimensional model. When this one-dimensional system has a gap with a non-zero topological number, the system can be regarded as a one-dimensional topological insulator and has edge states. This one-dimensional system is of symmetry class BDI and can possess a topological number of ℤ <cit.>.

To explicitly demonstrate the existence of the edge states, we numerically evaluate the band energy for lattices with finite widths. We denote the number of lattice sites perpendicular to the edges as N (see Fig. <ref>) and obtain 2N bands. The obtained energy bands are shown in Fig. <ref>. For the straight edges [Figs. <ref>(a)–(c)], we do not find the edge states. It is consistent with w(k)=0. For the zigzag edges [Figs. <ref>(d)–(f)], we obtain isolated zero-energy states except for λ_R=0 [Fig. <ref>(d)]. In particular, for α=0.5π, the zero-energy states appear at all the k points except for the projected Weyl points, as is expected from w(k) ≠ 0. We find that the zero-energy states remain even for finite t, as shown in Fig. <ref>(e). For an even number of N, the energy of the zero-energy states shifts from zero around the projected Weyl points when N is small.
For an odd number of N, we obtain zero energy even for a small N. Thus, we set N=51 in the calculations. We discuss the characteristics of the zero-energy edge states. We define c_i kσ as the Fourier transform of c_rσ along the edges, where i labels the site perpendicular to the edges (see Fig. <ref>). For the lattice with the zigzag edges, we can show that the states c_-(N-1)/2, π/4, ↓^†|0⟩ and c_(N-1)/2, π/4, ↑^†|0⟩ do not have matrix elements of H_R, where |0⟩ is the vacuum state. Thus, these states are the zero-energy states for α=0.5π completely localized on the left and right edges, respectively, with opposite spins. This helical character of the edge states is natural since the system lacks inversion symmetry due to the Rashba spin-orbit coupling. For other momenta and α, we calculate the spin density of the zero-energy edge states n_0 k σ(i)=⟨ 0 k| c_ikσ^† c_ikσ|0 k ⟩, where |0 k ⟩ denotes the zero-energy state at momentum k. The zero-energy states are doubly degenerate, and we take the average of the two states. We show n_0 k σ(i) for α=0.3π, as an example, in Fig. <ref>. At k where the bulk band gap is sufficiently large, the zero-energy states are localized well on the edges [Figs. <ref>(c) and <ref>(d)]. As the bulk band gap becomes small, the zero-energy states penetrate inner sites [Figs. <ref>(b) and <ref>(e)] and the zero-energy states extend in the entire lattice when the gap closes [Figs. <ref>(a) and <ref>(f)]. The spin components are opposite between the edges. For example, for k=0.4π and 0.45π, the up-spin state dominates on the right edge while the down-spin state dominates on the left edge. Thus, the edge states are helical. The spin components are exchanged between states at k and -k [compare Fig. <ref>(d) with Fig. <ref>(g) and Fig. <ref>(e) with Fig. <ref>(h)]. In Fig. <ref>(i), we show a schematic view of the spin density corresponding to k≃ 0.4π on the real-space lattice. § WEYL SEMIMETALLIC STATE INDUCED BY THE CORRELATION EFFECTS In this section, we investigate the effects of the Coulomb interaction U at half-filling, i.e., the electron number per site n=1, within the paramagnetic phase by applying the variational Monte Carlo method <cit.>. To achieve this objective, it is necessary to select a wave function capable of describing the Mott insulating state, as a Mott transition is anticipated, at least in the ordinary Hubbard model without the Rashba spin-orbit coupling. In this study, we employ a wave function with doublon-holon binding factors [doublon-holon binding wave function (DHWF)] <cit.>. A doublon means a doubly occupied site and a holon means an empty site. Such intersite factors like doublon-holon binding factors are essential to describe the Mott insulating state <cit.>. Indeed, the DHWF has succeeded in describing the Mott transition for the single-orbital <cit.> and two-orbital <cit.> Hubbard models. The DHWF is given by |Ψ(α_eff)⟩ = P_d P_h P_G | Φ(α_eff)⟩. The Gutzwiller projection operator P_G=∏_r[1-(1-g)P_d r], describes onsite correlations, where P_d r = n_r↑n_r↓ is the projection operator onto the doublon state at r and g is a variational parameter. The parameter g tunes the population of the doubly occupied sites. When the onsite Coulomb interaction is strong and n=1, most sites should be occupied by a single electron each. In this situation, if a doublon is created, a holon should be around it to reduce the energy by using singly occupied virtual states. P_d and P_h describe such doublon-holon binding effects. 
P_d is an operator to include intersite correlation effects concerning the doublon states. This is defined as follows <cit.>: P_d=∏_r[1-(1-ζ_d) P_d r∏_a (1-P_h r+a) ], where P_h r = (1-n_r↑)(1-n_r↓) is the projection operator onto the holon state at r and a denotes the vectors connecting the nearest-neighbor sites. P_d gives factor ζ_d when site r is in the doublon state and there is no holon at nearest-neighbor sites r+a. Similarly, P_h describing the intersite correlation effects on the holon state is defined as P_h=∏_r[1-(1-ζ_h) P_h r∏_a (1-P_d r+a) ]. Factor ζ_h appears when a holon exists without a nearest-neighboring doublon. For the half-filled case, we can use the relation ζ_d=ζ_h due to the electron-hole symmetry of the model. The one-electron part |Φ(α_eff) ⟩ of the wave function is given by the ground state of the non-interacting Hamiltonian H_0(α_eff) in which α in H_0 is replaced by α_eff. We can choose α_eff different from the original α in the model Hamiltonian. Such a band renormalization effect of the one-electron part is discussed for a Hubbard model with next-nearest-neighbor hopping <cit.>. We define the normal state as |Ψ_N⟩=|Ψ(α_eff=α)⟩, i.e., α_eff remains the bare value. We also define the Weyl semimetallic state as |Ψ_Weyl⟩=|Ψ(α_eff=0.5π)⟩, i.e., all the Weyl points are at the Fermi level and the Fermi surface disappears. In addition, we can choose other values of α_eff, but in a finite-size lattice, a slight change of α_eff does not change the set of the occupied wave numbers and the wave function |Φ(α_eff) ⟩. Thus, we have limited choices for α_eff as the band renormalization in the Hubbard model with the next-nearest-neighbor hopping <cit.>. We use the antiperiodic-periodic boundary conditions since the closed shell condition is satisfied, i.e., no k point is exactly on the Fermi surface for a finite-size lattice and there is no ambiguity to construct |Φ(α_eff)⟩. The calculations are done for L × L lattices with L=12, 14, and 16. We evaluate the expectation value of energy by the Monte Carlo method. We optimize the variational parameters g and ζ_d=ζ_h to minimize the energy. We denote the optimized energy of |Ψ(α_eff) ⟩ as E(α_eff). In particular, we denote E_N=E(α_eff=α) and E_Weyl=E(α_eff=0.5π). By using the Monte Carlo method, we also evaluate the momentum distribution function n(k)=∑_σ⟨ c_kσ^†c_kσ⟩, where ⟨⋯⟩ represents the expectation value in the optimized wave function. In Fig. <ref>(a), we show n(k) in the normal state at α=0.25π for L=16. For U/t̃=10, n(k) has clear discontinuities at the Fermi momenta. On the other hand, for U/t̃=14, n(k) does not have such a discontinuity; that is, the system is insulating and a Mott metal-insulator transition takes place between U/t̃=10 and U/t̃=14. To determine the Mott metal-insulator transition point U_MIT, we evaluate the quasiparticle renormalization factor Z, which is inversely proportional to the effective mass and becomes zero in the Mott insulating state, by the jump in n(k). Except for α=0, we evaluate Z by the jump between (π,0) and (π,π) as shown in Fig. <ref>(a). For α=0, the above path does not intersect the Fermi surface and we use the jump between (π,π) and (0,0) instead. In Fig. <ref>(b), we show the U dependence of Z for α=0.25π and L=16. By extrapolating Z to zero, we determine U_MIT/t̃≃ 12.9. We note that for a small α with a large L, the Mott transition becomes first-order consistent with a previous study for α=0 <cit.>. We have also evaluated energies for some values of α_effα. 
Figure <ref>(a) shows energies for α_eff=0.18π and 0.22π measured from the normal state energy at α=0.2π for L=16. The normal state has the lowest energy, at least for U/t̃≤ 20. Thus, the renormalization of α, even if it exists, is weak for a system distant from the Weyl semimetallic state (α=0.5π). A similar conclusion is obtained for a small intersite spin-orbit coupling case of the Kane-Mele-Hubbard model <cit.>. It is in contrast to the onsite spin-orbit coupling case <cit.>, where the effective spin-orbit coupling is enhanced by the Coulomb interaction even when the bare spin-orbit coupling is small. On the other hand, the renormalization of α becomes strong around α=0.5π. In Fig. <ref>(b), we show the energy E_Weyl of the Weyl semimetallic state measured from that of the normal state for α=0.4π for L=16. E_Weyl becomes lower than the normal state energy at U>U_Weyl≃ 9.4t̃. There is a possibility that the normal state changes to the Weyl semimetallic state gradually by changing α_eff continuously. However, for a finite lattice, the choices of α_eff are limited between α_eff=α and α_eff=0.5π. For example, at α=0.4π, there is no choice for L=12 and L=14 and only one choice 0.4017<α_eff/π<0.4559 for L=16. For this reason, we evaluate U_Weyl by comparing the energies of the normal and the Weyl semimetallic states to show the tendency toward the Weyl semimetallic state by the renormalization effect on α. Figure <ref> shows a phase diagram without considering magnetic order. The size dependence of the phase boundaries is weak. For a weak Rashba spin-orbit coupling region, i.e., for a small α, the Rashba spin-orbit coupling stabilizes the metallic phase. It is consistent with a previous study by a cluster perturbation theory <cit.>. Around α=0.5π, we obtain a wide region of the Weyl semimetallic phase. Thus, we expect phenomena originating from the Weyl points can be realized even away from α=0.5π with the aid of electron correlations. In the Weyl semimetallic state, the density of states at the Fermi level vanishes, and thus, energy gain is expected similar to the energy gain by a gap opening in an antiferromagnetic transition. We note that such a renormalization effect on α cannot be expected within the Hartree-Fock approximation and is a result of the electron correlations beyond the Hartree-Fock approximation. § HARTREE-FOCK APPROXIMATION FOR MAGNETISM In this section, we discuss the magnetism of the model by the Hartree-Fock approximation. The energy dispersion given in Eq. (<ref>) has the following property: E_±(k+Q)=-E_∓(k) for Q=(π,π). When E_a(k)=0, in particular, E_-a(k+Q)=E_a(k)=0. Thus, the Fermi surface is perfectly nested for half-filling (the Fermi energy is zero) with the nesting vector Q=(π,π) [see Figs. <ref>(a)–(c)]. Due to this nesting, the magnetic susceptibility at Q=(π,π) diverges at zero temperature <cit.>. It indicates that the magnetic order occurs with an infinitesimally small value of the Coulomb interaction U at zero temperature. However, some recent Hartree-Fock studies argue the existence of the paramagnetic phase with finite U <cit.>. To resolve this contradiction and gain insights into magnetism, we apply the Hartree-Fock approximation to the model within two-sublattice magnetic order, i.e., with ordering vector of Q=(π,π) or Q=(π,0). 
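Before setting up the Hartree-Fock equations, the nesting property E_±(k+Q) = -E_∓(k) quoted above can be checked directly. The short sketch below (an editorial illustration, not part of the original paper) verifies it numerically for random momenta.

import numpy as np

def bands(kx, ky, alpha, t_tilde=1.0):
    # same non-interacting dispersion as in the previous sketch
    t = t_tilde * np.cos(alpha)
    lam = np.sqrt(2.0) * t_tilde * np.sin(alpha)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    h = 2.0 * lam * np.hypot(np.sin(kx), np.sin(ky))
    return eps - h, eps + h

rng = np.random.default_rng(1)
alpha = 0.35 * np.pi
for kx, ky in rng.uniform(-np.pi, np.pi, size=(5, 2)):
    e_m, e_p = bands(kx, ky, alpha)
    e_mQ, e_pQ = bands(kx + np.pi, ky + np.pi, alpha)
    # E_+(k+Q) must equal -E_-(k) and E_-(k+Q) must equal -E_+(k)
    assert np.isclose(e_pQ, -e_m) and np.isclose(e_mQ, -e_p)
print("perfect nesting E_pm(k+Q) = -E_mp(k) holds for Q = (pi, pi)")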
The Hartree-Fock Hamiltonian is given by H_HF = ∑_k[ c_k^† c_k+Q^† ][ ϵ̂(k) -Δ·σ; -Δ·σ ϵ̂(k+Q) ][ c_k; c_k+Q ], where k-summation runs over the folded Brillouin zone of the antiferromagnetic state, c_k=(c_k↑,c_k↓)^T, ϵ̂(k)=ϵ_kI+H_R(k), and Δ=Um_AF. Here, m_AF=[1/(2L^2)]∑_rσσ'e^-iQ·r⟨ c_rσ^†σ_σσ' c_rσ'⟩_HF, where ⟨⋯⟩_HF represents the expectation value in the ground state of H_HF. We solve the gap equation Δ=Um_AF self-consistently. First, we consider the magnetic order for Q=(π,π). Without the Rashba spin-orbit coupling, the asymptotic form m_AF=|m_AF|∼ (t̃/U)e^-2π√(t̃/U) for the weak-coupling region Δ=|Δ| ≪ W was obtained by Hirsch analyzing the gap equation <cit.>. If we take into consideration the fact that the asymptotic form of the density of states ρ(ϵ) ≃ -[1/(2π^2 t̃)] ln [|ϵ|/(16t̃)] for ϵ≃ 0 <cit.> is a good approximation even up to the band edge [see Fig. <ref>(d)], we obtain m ≃ (32t̃/U)e^-2π√(t̃/U). Indeed, this approximate form reproduces the numerical data well in the weak-coupling region as shown in Fig. <ref>(a). For a finite λ_R, we find numerically that m_AF is parallel to the x or y direction. It is expected from the effective Hamiltonian in the strong coupling limit we will discuss later. By assuming Δ≪λ_R and Δ≪ W, we obtain m_AF∼ (t̃/U)e^-2/[Uρ(0)] for a finite ρ(0), where ρ(0) is the density of states at the Fermi level. The coefficient to m_AF is determined by the entire behavior of the density of states up to the band edge [see Figs. <ref>(e) and <ref>(f)] and we cannot obtain it analytically in general. Figures <ref>(b) and <ref>(c) show the numerically obtained m_AF for α=0.2π and 0.4π, respectively, along with the fitted curves of (at̃/U)e^-2/[Uρ(0)], where a is the fitting parameter. The fitted curves reproduce well the numerical data in the weak-coupling region. From the obtained asymptotic form and the numerical data supporting it, we conclude that the magnetic order occurs by an infinitesimally small U for 0 ≤α < 0.5π consistent with the divergence of the magnetic susceptibility <cit.>. We cannot apply this asymptotic form for α=0.5π since ρ(0)=0 there. The numerical result shown in Fig. <ref>(d) indicates a first-order transition for α=0.5π. Here, we discuss previous papers indicating the existence of the paramagnetic phase with finite U. In Ref. <cit.>, the authors introduced a threshold ε for the magnetization m_AF. Then, the authors determined the magnetic transition point when m_AF becomes smaller than ε. However, m_AF becomes exponentially small in the weak-coupling region, as understood from the above analysis. In Ref. <cit.>, ε is not sufficiently small to discuss exponentially small m_AF and a finite region of the paramagnetic phase was obtained. In Ref. <cit.>, the authors calculated the energy difference Δ E between the paramagnetic state and the antiferromagnetic state. Then, the authors introduced a scaling between Δ E and U-U_AF, where U_AF is the antiferromagnetic transition point. They tuned U_AF to collapse the data with different α onto a single curve in a large-U region. Then, they obtained finite U_AF for α 0. However, this scaling analysis does not have a basis. In particular, if such a scaling holds for critical behavior, the data collapse should occur for U ≃ U_AF, not for a large-U region. We have also solved the gap equation for Q=(π,0) and obtained m_AF parallel to the y direction. By comparing energies for Q=(π,π) and Q=(π,0), we construct a phase diagram shown in Fig. <ref>. 
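For orientation, the self-consistency loop behind the gap equation can be illustrated in its simplest limit. The sketch below is an editorial illustration under the simplifying assumption λ_R = 0, where the problem reduces to the standard half-filled Hubbard antiferromagnet analyzed by Hirsch, with m_AF = (1/N)∑_k Δ/[2√(ϵ_k^2+Δ^2)] and Δ = U m_AF; it is not the full two-sublattice Rashba-Hubbard calculation used in the paper. It iterates Δ = U m_AF on a finite k-grid and shows how m_AF shrinks rapidly at weak coupling.

import numpy as np

def staggered_moment(U, t=1.0, L=200, tol=1e-10, max_iter=10000):
    # iterate Delta_{n+1} = U * (1/N) sum_k Delta_n / (2 sqrt(eps_k^2 + Delta_n^2))
    k = 2.0 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(k, k)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
    delta = 0.5 * U                      # strong-coupling starting guess
    for _ in range(max_iter):
        new = U * np.mean(delta / (2.0 * np.sqrt(eps**2 + delta**2)))
        if abs(new - delta) < tol:
            break
        delta = new
    return delta / U                     # m_AF = Delta / U

for U in (2.0, 4.0, 8.0):
    print("U/t =", U, " m_AF ~", round(staggered_moment(U), 4))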
As noted, the antiferromagnetic state with Q=(π,π) occurs at infinitesimally small U except for α=0.5π. The Weyl semimetallic state remains for U/ t̃≲ 4.4 at α=0.5π. The antiferromagnetic state with Q=(π,0) appears at large U for α/π≳ 0.2. This phase boundary can be understood from the effective Hamiltonian in the strong coupling limit. The effective Hamiltonian is derived from the second-order perturbation theory concerning t and λ_R and is given by <cit.> H_eff = ∑_raμ[ J^μ_aS_r^μS_r+a^μ +D_a^μ(S_r×S_r+a)^μ], where a=x̂ or ŷ, μ=x, y, or z, S_r is the spin operator at site r, J_x̂^x =J_x̂^z =J_ŷ^y =J_ŷ^z = 4(t^2-λ_R^2)/U, J_x̂^y =J_ŷ^x = 4(t^2+λ_R^2)/U, D_x̂^y =-D_ŷ^x =8tλ_R/U, and the other components of D_a are zero. From the anisotropy in the interaction, we expect the ordered moments along the x or y direction for Q=(π,π) and along the y direction for Q=(π,0). Thus, the directions of the ordered moments obtained with the Hartree-Fock approximation are in accord with the effective Hamiltonian. For t ≪λ_R (α≃ 0), the magnetic order with Q=(π,π) is stable as in the ordinary Heisenberg model. For t ≫λ_R (α≃ 0.5π), the magnetic order with Q=(π,0) has lower energy than that with Q=(π,π) due to the anisotropic interaction. For t=λ_R (J_x̂^x =J_x̂^z =J_ŷ^y =J_ŷ^z=0), if we ignore the Dzyaloshinskii-Moriya interaction D_a, the model is reduced to the compass model <cit.>. It is known as a highly frustrated model. The condition t=λ_R corresponds to α=tan^-1(1/√(2))=0.1959π. Thus, the phase boundary α≃ 0.2π obtained with the Hartree-Fock approximation at a large-U region is corresponding to the highly frustrated region of the model. However, in a large-U region, we expect longer-period magnetic order due to the Dzyaloshinskii-Moriya interaction. It is out of the scope of the present study and has already been investigated by previous studies using the effective Hamiltonian <cit.>. Our important finding in this section is the absence of the paramagnetic phase except for α=0.5π in the weak-coupling region. However, the ordered moment and the energy gain of the antiferromagnetic state in the weak-coupling region are exponentially small. Thus, the effects of this magnetic order should be weak. In addition, this magnetic order would be easily destroyed by perturbations such as the next-nearest-neighbor hopping breaking the nesting condition <cit.>. Thus, the discussions in the previous sections without considering magnetic order are still meaningful. § SUMMARY We have investigated the Rashba-Hubbard model on a square lattice. The Rashba spin-orbit coupling generates the two-dimensional Weyl points, which are characterized by non-zero winding numbers. We have investigated lattices with edges and found zero-energy states on a lattice with zigzag edges. The zero-energy states are localized around the edges and have a helical character. The large density of states due to the flat zero-energy band may result in magnetic polarization at edges, similar to graphene <cit.>. We have also examined the effects of the Coulomb interaction U. The Coulomb interaction renormalizes the ratio of the coupling constant of the Rashba spin-orbit coupling λ_R to the hopping integral t effectively. As a result, the Weyl points can move to the Fermi level by the correlation effects. Thus, the Coulomb interaction can enhance the effects of the Weyl points and assist in observing phenomena originating from the Weyl points even if the bare Rashba spin-orbit coupling is not large. 
We have also investigated the magnetism of the model by the Hartree-Fock approximation. We have found that the antiferromagnetic state with the ordering vector Q=(π,π) occurs at infinitesimally small U due to the perfect nesting of the Fermi surface even for a finite λ_R. However, the density of states at the Fermi level becomes small for a large λ_R and, as a result, the energy gain by the antiferromagnetic order is small in the weak-coupling region. Therefore, the effects of the magnetic order should be weak in such a region. In addition, this magnetic order would be unstable against perturbations, such as the inclusion of next-nearest-neighbor hopping <cit.>. Thus, we conclude that the discussions on the Weyl semimetal without assuming magnetism are still meaningful.

This work was supported by JSPS KAKENHI Grant Number JP23K03330.

[Bychkov and Rashba(1984)] Y. A. Bychkov and E. I. Rashba, JETP Lett. 39, 78 (1984).
[Datta and Das(1990)] S. Datta and B. Das, Appl. Phys. Lett. 56, 665 (1990).
[Schultz et al.(1996)] M. Schultz, F. Heinrichs, U. Merkt, T. Colin, T. Skauli, and S. Løvold, Semicond. Sci. Technol. 11, 1168 (1996).
[Nitta et al.(1997)] J. Nitta, T. Akazaki, H. Takayanagi, and T. Enoki, Phys. Rev. Lett. 78, 1335 (1997).
[Engels et al.(1997)] G. Engels, J. Lange, T. Schäpers, and H. Lüth, Phys. Rev. B 55, R1958 (1997).
[Sinova et al.(2004)] J. Sinova, D. Culcer, Q. Niu, N. A. Sinitsyn, T. Jungwirth, and A. H. MacDonald, Phys. Rev. Lett. 92, 126603 (2004).
[Inoue et al.(2004)] J.-i. Inoue, G. E. W. Bauer, and L. W. Molenkamp, Phys. Rev. B 70, 041303(R) (2004).
[Chalaev and Loss(2005)] O. Chalaev and D. Loss, Phys. Rev. B 71, 245318 (2005).
[Dimitrova(2005)] O. V. Dimitrova, Phys. Rev. B 71, 245327 (2005).
[Sugimoto et al.(2006)] N. Sugimoto, S. Onoda, S. Murakami, and N. Nagaosa, Phys. Rev. B 73, 113305 (2006).
[Dugaev et al.(2010)] V. K. Dugaev, M. Inglot, E. Y. Sherman, and J. Barnaś, Phys. Rev. B 82, 121310(R) (2010).
[Shitade and Tatara(2022)] A. Shitade and G. Tatara, Phys. Rev. B 105, L201202 (2022).
[Gor'kov and Rashba(2001)] L. P. Gor'kov and E. I. Rashba, Phys. Rev. Lett. 87, 037004 (2001).
[Yanase and Sigrist(2008)] Y. Yanase and M. Sigrist, J. Phys. Soc. Jpn. 77, 124711 (2008).
[Beyer et al.(2023)] J. Beyer, J. B. Hauck, L. Klebl, T. Schwemmer, D. M. Kennes, R. Thomale, C. Honerkamp, and S. Rachel, Phys. Rev. B 107, 125115 (2023).
[Cocks et al.(2012)] D. Cocks, P. P. Orth, S. Rachel, M. Buchhold, K. Le Hur, and W. Hofstetter, Phys. Rev. Lett. 109, 205303 (2012).
[Radić et al.(2012)] J. Radić, A. Di Ciolo, K. Sun, and V. Galitski, Phys. Rev. Lett. 109, 085303 (2012).
[Gong et al.(2015)] M. Gong, Y. Qian, M. Yan, V. W. Scarola, and C. Zhang, Sci. Rep. 5, 10050 (2015).
[Minář and Grémaud(2013)] J. Minář and B. Grémaud, Phys. Rev. B 88, 235130 (2013).
[Kennedy et al.(2022)] W. Kennedy, S. dos Anjos Sousa-Júnior, N. C. Costa, and R. R. dos Santos, Phys. Rev. B 106, 165121 (2022).
[Kawano and Hotta(2023)] M. Kawano and C. Hotta, Phys. Rev. B 107, 045123 (2023).
[Zhang et al.(2015)] X. Zhang, W. Wu, G. Li, L. Wen, Q. Sun, and A.-C. Ji, New J. Phys. 17, 073036 (2015).
[Brosco and Capone(2020)] V. Brosco and M. Capone, Phys. Rev. B 101, 235149 (2020).
[Mireles and Kirczenow(2001)] F. Mireles and G. Kirczenow, Phys. Rev. B 64, 024426 (2001).
[Hou(2013)] J.-M. Hou, Phys. Rev. Lett. 111, 130403 (2013).
[Sun et al.(2012)] K. Sun, W. V. Liu, A. Hemmerich, and S. Das Sarma, Nat. Phys. 8, 67 (2012).
[Berry(1984)] M. V. Berry, Proc. R. Soc. London, Ser. A 392, 45 (1984).
[Fujita et al.(1996)] M. Fujita, K. Wakabayashi, K. Nakada, and K. Kusakabe, J. Phys. Soc. Jpn. 65, 1920 (1996).
[Ryu and Hatsugai(2002)] S. Ryu and Y. Hatsugai, Phys. Rev. Lett. 89, 077002 (2002).
[Hatsugai(2009)] Y. Hatsugai, Solid State Commun. 149, 1061 (2009).
[Schnyder et al.(2008)] A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Phys. Rev. B 78, 195125 (2008).
[Kitaev(2009)] A. Kitaev, AIP Conf. Proc. 1134, 22 (2009).
[Ryu et al.(2010)] S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Ludwig, New J. Phys. 12, 065010 (2010).
[Yokoyama and Shiba(1987)] H. Yokoyama and H. Shiba, J. Phys. Soc. Jpn. 56, 1490 (1987).
[Kaplan et al.(1982)] T. A. Kaplan, P. Horsch, and P. Fulde, Phys. Rev. Lett. 49, 889 (1982).
[Yokoyama and Shiba(1990)] H. Yokoyama and H. Shiba, J. Phys. Soc. Jpn. 59, 3669 (1990).
[Yokoyama(2002)] H. Yokoyama, Prog. Theor. Phys. 108, 59 (2002).
[Capello et al.(2006)] M. Capello, F. Becca, S. Yunoki, and S. Sorella, Phys. Rev. B 73, 245116 (2006).
[Watanabe et al.(2006)] T. Watanabe, H. Yokoyama, Y. Tanaka, and J.-i. Inoue, J. Phys. Soc. Jpn. 75, 074707 (2006).
[Yokoyama et al.(2006)] H. Yokoyama, M. Ogata, and Y. Tanaka, J. Phys. Soc. Jpn. 75, 114706 (2006).
[Onari et al.(2007)] S. Onari, H. Yokoyama, and Y. Tanaka, Physica C 463–465, 120 (2007).
[Koga et al.(2006)] A. Koga, N. Kawakami, H. Yokoyama, and K. Kobayashi, AIP Conf. Proc. 850, 1458 (2006).
[Takenaka and Kawakami(2012)] Y. Takenaka and N. Kawakami, J. Phys.: Conf. Ser. 400, 032099 (2012).
[Kubo(2021)] K. Kubo, Phys. Rev. B 103, 085118 (2021).
[Kubo(2022)] K. Kubo, J. Phys. Soc. Jpn. 91, 124707 (2022).
[Kubo(2023)] K. Kubo, JPS Conf. Proc. 38, 011161 (2023).
[Sato and Yokoyama(2016)] R. Sato and H. Yokoyama, J. Phys. Soc. Jpn. 85, 074701 (2016).
[Richter et al.(2021)] M. Richter, J. Graspeuntner, T. Schäfer, N. Wentzell, and M. Aichhorn, Phys. Rev. B 104, 195107 (2021).
[Liu et al.(2023)] Z. Liu, J.-Y. You, B. Gu, S. Maekawa, and G. Su, Phys. Rev. B 107, 104407 (2023).
[Jiang(2023)] K. Jiang, Chin. Phys. Lett. 40, 017102 (2023).
[Hirsch(1985)] J. E. Hirsch, Phys. Rev. B 31, 4403 (1985).
[Fazekas(1999)] P. Fazekas, Lecture Notes on Electron Correlation and Magnetism, Series in Modern Condensed Matter Physics, Vol. 5 (World Scientific, 1999).
[Kugel and Khomskii(1982)] K. I. Kugel and D. I. Khomskii, Sov. Phys. Usp. 25, 231 (1982).
http://arxiv.org/abs/2307.07574v1
20230714183757
Sparsified Simultaneous Confidence Intervals for High-Dimensional Linear Models
[ "Xiaorui Zhu", "Yichen Qin", "Peng Wang" ]
stat.ME
[ "stat.ME", "econ.EM", "stat.ML", "62fxx" ]
Sparsified Simultaneous Confidence Intervals for High-Dimensional Linear Models

Xiaorui Zhu, Yichen Qin, and Peng Wang
(Xiaorui Zhu is Assistant Professor in the Department of Business Analytics & Technology Management, Towson University. Yichen Qin is Associate Professor in the Department of Operations, Business Analytics, and Information Systems, University of Cincinnati. Peng Wang is Associate Professor in the Department of Operations, Business Analytics, and Information Systems, University of Cincinnati.)
University of Cincinnati
April 20, 2022

Statistical inference of the high-dimensional regression coefficients is challenging because the uncertainty introduced by the model selection procedure is hard to account for. A critical question remains unsettled; that is, is it possible, and if so how, to embed the inference of the model into the simultaneous inference of the coefficients? To this end, we propose a notion of simultaneous confidence intervals called the sparsified simultaneous confidence intervals. Our intervals are sparse in the sense that some of the intervals' upper and lower bounds are shrunken to zero (i.e., [0,0]), indicating the unimportance of the corresponding covariates. These covariates should be excluded from the final model. The rest of the intervals, either containing zero (e.g., [-1,1] or [0,1]) or not containing zero (e.g., [2,3]), indicate the plausible and significant covariates, respectively. The proposed method can be coupled with various selection procedures, making it ideal for comparing their uncertainty. For the proposed method, we establish desirable asymptotic properties, develop intuitive graphical tools for visualization, and justify its superior performance through simulation and real data analysis.

Keywords: high-dimensional inference, model confidence bounds, simultaneous confidence intervals, selection uncertainty.
§ INTRODUCTION

High-dimensional data analysis plays an important role in modern scientific discoveries. There has been extensive work on high-dimensional variable selection and estimation using penalized regressions, such as Lasso <cit.>, SCAD <cit.>, MCP <cit.>, and selection by partitioning solution paths <cit.>. In recent years, inference for the true regression coefficients and the true model has begun to attract attention. A major challenge of high-dimensional inference is how to quantify the uncertainty of the coefficient estimate, because such uncertainty depends on two components: the uncertainty in parameter estimation given the selected model and the uncertainty in selecting the model, both of which are difficult to estimate and are actively studied.

For inference of the regression coefficients, <cit.> introduces the notion of simultaneous confidence intervals, which is a sequence of intervals containing the true coefficients at a given probability. For the high-dimensional linear models, <cit.> and <cit.> construct the simultaneous confidence intervals using the debiased Lasso approach <cit.>. <cit.> also utilize the debiased Lasso to develop an inference tool. Another route to achieve this objective is the simultaneous confidence region <cit.>, but the boundary of the simultaneous confidence region is a function of all coefficients, making it hard to interpret and visualize. There is also a parallel stream of research in post-selection intervals for regression coefficients in high-dimensional linear models <cit.>. However, their focus is on the coefficients in the selected model, while our target is the entire vector of regression coefficients. Besides, <cit.> introduces the bootstrap Lasso + partial ridge estimator to construct individual confidence intervals, but their emphasis is not on the simultaneous confidence intervals. Other inference tools include the multiple simultaneous testing proposed by <cit.> and the adaptive confidence interval in a study of <cit.>. Many of the aforementioned simultaneous confidence intervals are often too wide and have non-zero widths for all covariates regardless of their significance, so the estimation uncertainty cannot be efficiently reflected.
For the inference of the true model, <cit.> introduces variable selection confidence set by constructing a set of models that contain the true model with a given confidence level <cit.>. Through the size of the model set, this inference tool characterizes the model selection uncertainty. <cit.> propose to construct the lower bound model and upper bound model such that the true model is trapped in between at a pre-specified confidence level <cit.>. <cit.> propose to use a hierarchical testing procedure to form the model confidence set for prediction. Besides, <cit.> propose variable selection deviation measures for the quantification of model selection uncertainty. Although these methods show promise in conducting inference for the true model in low-dimensional cases, their performances for high-dimensional models are often unsatisfactory. To conduct the inference of the true model, Hansen et al. (2011) propose to use a hierarchical testing procedure to form the model confidence set for prediction. Ferrari and Yang (2015) propose to construct the variable selection confidence set by a sequence of F-tests for linear regressions and likelihood ratio testings for generalized linear models (Zheng et al., 2019a,b) while allowing p and n to grow at the same time. Li et al. (2019) propose to construct the lower bound model and upper bound model such that the true model is trapped in between at a pre-specified confidence level for linear regressions and graphical models (Wang et al., 2021; Qin and Wang, 2021). Although these methods show promise in conducting inference for model selection in low dimensional cases, their performances for high dimensional models are not satisfactory. Meanwhile, aiming to reduce selection uncertainty in high-dimensional settings, Meinshausen and Bühlmann (2010) propose stability selection to improve upon existing selection methods using a subsampling approach. Given the importance of inference for both the true regression coefficients and the true model, a critical question remains unsettled; that is, is it possible and how to embed the inference of the model into the inference of parameter? To address this issue, we propose a new notion of the simultaneous confidence intervals, termed sparsified simultaneous confidence intervals. Specifically, we construct a sequence of confidence intervals { [ β_j , β_j] }_j=1^p which simultaneously contain the true coefficients with 1-α confidence level. Our method is sparse in the sense that some of its intervals' upper and lower bounds are shrunken to zero (i.e., [0,0]), meaning that the corresponding covariates are unimportant and should be excluded from the final model. The other intervals, either containing zero (e.g., [-1,1] or [0,1]) or not containing zero (e.g., [2,3]), classify the corresponding covariates into the plausible ones and significant ones, respectively. The plausible covariates are weakly associated with the response variable. The significant covariates are strongly associated with the response variable and should be included in the final model. Therefore, the proposed intervals offer more information about the true coefficient vector than classical non-sparse simultaneous confidence intervals. In addition, the proposed method naturally provides two nested models, a lower bound model (that includes all the significant covariates) and an upper bound model (that includes all the significant and plausible covariates), so that the true model is trapped in between them at the same confidence level. 
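Operationally, the three-way reading of the intervals follows directly from their bounds. Below is a minimal sketch (in Python/NumPy rather than the paper's own software; the function and variable names are ours) of how a vector of lower and upper bounds translates into significant, plausible, and unimportant covariates.

```python
import numpy as np

def classify_covariates(lower, upper, tol=0.0):
    """Split covariate indices into significant / plausible / unimportant
    groups from sparsified simultaneous confidence interval bounds.

    significant : interval excludes zero        (lower * upper > 0)
    plausible   : contains zero, non-zero width
    unimportant : interval is exactly [0, 0]
    """
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    unimportant = np.where((np.abs(lower) <= tol) & (np.abs(upper) <= tol))[0]
    significant = np.where(lower * upper > tol)[0]
    plausible = np.setdiff1d(np.arange(lower.size),
                             np.union1d(unimportant, significant))
    return significant, plausible, unimportant

# toy bounds for five coefficients
lo = np.array([2.0, -1.0, 0.0, 0.0, -3.0])
hi = np.array([3.0,  1.0, 1.0, 0.0, -1.5])
print(classify_covariates(lo, hi))   # ([0, 4], [1, 2], [3])
```

The significant set and the union of the significant and plausible sets are exactly the two nested models mentioned above.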
For illustrative purposes, we present an example to compare our method to <cit.>'s method, simultaneous confidence interval by debiased Lasso. We simulate 50 observations from the linear model 𝐲 = 𝐗β^0 + ε where β^0=(3,3,3,2,2,1,1 ,0 ,...,0)^T ∈ℝ^60 and both covariates and random error are standard normal. We construct both types of confidence intervals at 95% confidence level in Figure <ref>. The true coefficient vector is in red, and the confidence intervals are in dark blue. Although both methods contain the true coefficient vector, our method is significantly narrower and presents more insights into the true coefficients and model. For example, the unimportant covariates in the blue shaded area should be excluded from the final model. The plausible covariates are plotted in a grey shaded area, for which we do not have enough evidence to decide to include or exclude in the model. However, by the signs of the intervals, we at least can infer they all have positive or zero effects. The significant covariates in red labels should be included in the model. In contrast, <cit.> 's method carries less model information since its widths are the same, ignoring the difference in estimation uncertainty. This article contributes to the literature in the following aspects. We integrate the sparsity feature into the simultaneous confidence intervals to provide insights on model selection uncertainty besides carrying out inference for regression coefficients. Under a consistent model selection procedure, we have established the asymptotic coverage probability. We develop a graphical representation of the proposed method to enhance the visualization for both parameter and model uncertainty. In addition to satisfactory performance in typical settings, we numerically show that the proposed simultaneous confidence intervals perform well in the weak signal settings compared to the existing methods. The article is organized as follows. In Section <ref>, we introduce our sparsified simultaneous confidence intervals, establish its theoretical properties, develop its graphical presentation, and discuss its connections to other methods. We explain the key ingredient in our approach, selection by partitioning solution paths, in Section <ref>. Numerical experiments and real data examples evidence the advantages of our approach in Sections <ref> and <ref>. We conclude in Section <ref> and relegate proofs to the supplementary materials. § INFERENCE FOR HIGH-DIMENSIONAL LINEAR MODELS Throughout the article, we focus on the inference tasks for the linear model 𝐲 = 𝐗β^0 + ε where ε∼ N(0, σ^2𝐈_n), 𝐲∈ℝ^n is the response vector, and 𝐗∈ℝ^n × p is the fixed design matrix containing p covariates. The parameter vector β^0=(β^0_1, ⋯, β^0_p)^T ∈ℝ^p is assumed to be sparse with a small number of nonzero coefficients. We denote the index sets of the nonzero and zero coefficients as 𝒮_0={j: β^0_j0} and 𝒮_0^c={j: β^0_j=0}, respectively. We denote the cardinality of the active set as s_0 = |𝒮_0| and assume it is smaller than p, and the dimension p can be larger than n. §.§ Sparsified Simultaneous Confidence Intervals We propose a new type of simultaneous confidence intervals, namely sparsified simultaneous confidence intervals (SSCI). It consists a sequence of confidence intervals that contains all the true coefficients simultaneously with a confidence level 1-α. That is, let SSCI_1-α = {β∈ℝ^p : β_j ≤β_j ≤β_j, j=1,…,p } where β_j and β_j are the lower and upper bounds of j-th coefficient, such that 𝐏(β^0 ∈SSCI_1-α) = 1-α. 
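For readers who want a toy instance of this setup, the illustrative example above (n=50, p=60, β^0=(3,3,3,2,2,1,1,0,…,0), standard normal covariates and errors) can be simulated in a few lines. This is only a data-generation sketch in Python/NumPy; it is not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 60
beta0 = np.zeros(p)
beta0[:7] = [3, 3, 3, 2, 2, 1, 1]        # seven non-zero coefficients, rest zero

X = rng.standard_normal((n, p))          # standard normal covariates
eps = rng.standard_normal(n)             # standard normal errors
y = X @ beta0 + eps                      # y = X beta^0 + eps
```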
The proposed SSCI is sparse in the sense that some of its intervals' upper and/or lower bounds are shrunken to zero, signaling the significance of the corresponding coefficients. To construct SSCI, we bootstrap a two-stage estimator that contains both the consistent model selection procedure and the refitted estimation procedure. Given an observed sample, we first apply the two-stage estimator to select an initial model and obtain the refitted coefficient estimate. We further generate bootstrap samples using the refitted estimate via the celebrated residual bootstrap <cit.>. For each bootstrap sample, we apply the same two-stage estimator to identify the bootstrap model 𝒮̂^(b) and obtain the refitted bootstrap coefficient estimate β̂^(b). After removing the α proportion of bootstrap estimates according to their outlyingness, we use the remaining bootstrap estimates to construct SSCI. The details of this procedure are outlined in Algorithm <ref>. For each bootstrap estimate, we measure its outlyingness relative to other bootstrap estimates, i.e., outlyingness score, by calculating the maximum absolute value of standardized coefficients O^(b)=O(β̂^(b)) = max_j∈{1,…,p}|(β̂^(b)_j - β̅̂̅_j)/SE(β̂_j)|, where β̅̂̅_j=∑^B_b=1β̂^(b)_j/B and SE(β̂_j)=(∑^B_b=1(β̂^(b)_j - β̅̂̅_j)^2/(B-1))^1/2 is the bootstrap standard error estimate. Intuitively, standardized coefficients reveal the variability of each bootstrap coefficient estimates, while the maximum absolute value among all covariates measures the overall outlyingness of the bootstrap estimate and the bootstrap model. Hence, the outlyingness scores identify the extreme and rare bootstrap models and estimates. For example, in one bootstrap iteration, if a two-stage estimator selects a particular covariate that has never been selected in other bootstrap iterations, the outlyingness score of this coefficient may be very large, which implies that this bootstrap estimate may be unusual and should be discarded. In Step 5 of Algorithm <ref>, we remove these outlying bootstrap estimates by cutting the outlying scores at its (1-α)-percentile, and use the remaining bootstrap estimates to construct our intervals. Since the bootstrap estimates are likely sparse, the boundaries of the confidence intervals of SSCI are likely sparse too, which carry important information about the covariates and the model. In particular, we define three groups of covariates: significant covariates whose intervals do not contain zero, denoted as 𝒮̂_1={j: β_j·β_j > 0 }; plausible covariates whose intervals contain zero but has non-zero width, denoted as 𝒮̂_2={j: β_j ·β_j ≤ 0 and β_j ≠β_j }, and unimportant covariates whose intervals contain zero and have zero-width, denoted as 𝒮̂_3={j: β_j = β_j = 0 }. The significant covariates are the ones strongly associated with the response variable and should be included in the final model at 1-α confidence level. The plausible covariates are the ones weakly associated with the response. We do not have enough evidence to prove their significance nor to disqualify them for the final model. The unimportant covariates are the superfluous predictors that should be excluded from the final model at 1-α confidence level. Therefore, not only do we have the estimation uncertainties for each coefficient estimate, but we also obtain model information based on the sparsity feature of the intervals. 
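A compact sketch of the steps just described — computing the outlyingness scores, discarding the most outlying α fraction of bootstrap estimates, and forming the intervals — is given below. It is written in Python/NumPy (the paper's software is in R), the envelope step (componentwise min/max over the retained bootstrap estimates) is our reading of the construction, and covariates that never vary across bootstrap fits are given an infinite standard error so they do not contribute to the score.

```python
import numpy as np

def sparsified_sci(boot_betas, alpha=0.05):
    """Form interval bounds from refitted bootstrap estimates (B x p array).

    Each row is one bootstrap estimate; entries are exactly zero for
    covariates not selected in that bootstrap fit.  Returns (lower, upper).
    """
    boot_betas = np.asarray(boot_betas, dtype=float)
    center = boot_betas.mean(axis=0)
    se = boot_betas.std(axis=0, ddof=1)
    se = np.where(se > 0, se, np.inf)        # never-varying covariates get score 0
    # outlyingness score: max absolute standardized coefficient per bootstrap draw
    scores = np.max(np.abs(boot_betas - center) / se, axis=1)
    keep = scores <= np.quantile(scores, 1.0 - alpha)
    lower = boot_betas[keep].min(axis=0)     # componentwise envelope of the
    upper = boot_betas[keep].max(axis=0)     # retained (1 - alpha) fraction
    return lower, upper
```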
The proposed algorithm can be coupled with many two-stage estimators that consists of a consistent model selection procedure followed by a consistent refitted regression estimator. For model selection methods, we study Lasso, adaptive Lasso, and <cit.>'s SPSP based on the solution path of adaptive Lasso (SPSP+AdaLasso) or Lasso (SPSP+Lasso). For the refitted estimator β̃, we adopt the least square refitted estimator through out this article, that is, β̃_S̃ = (𝐗_S̃^T 𝐗_S̃)^-1𝐗_S̃^T 𝐘∈ℝ^|S̃| and β̃_S̃^c=0∈ℝ^p-|S̃|. Through numerical studies, among all the methods we compared, we find SPSP is the best model selector for constructing the proposed intervals in terms of stability and coverage probability. Using SPSP, our intervals are noticeably narrower than other classical simultaneous confidence intervals, at the same time offering more accurate information about the true coefficient vector and the true model. Therefore, we recommend <cit.>'s method as the default selection method for our proposed approach. Lastly, the proposed method allows alternative outlyingness scores to be implemented. By defining outlyingness scores differently, we are able to construct different inference tools such as individual confidence intervals, simultaneous confidence intervals, or model confidence set <cit.>. We leave this topic for future work. §.§ Theoretical Properties In this section, we establish the theoretical properties of the proposed method, such as the coverage probability of the SSCI under various model selection methods. Without loss of generality, we assume 𝐗 are standardized with zero mean and unit variance. Let 𝐗_𝒮_0∈ℝ^n × s_0 be the true signal covariates matrix and s_0=|𝒮_0|. Suppose the selection method selects the true model with the probability 1-2e^-h(n), that is 𝐏(𝒮̃=𝒮_0)=1-2e^-h(n). Then, with B=o(e^h(n)), for any confidence level of α∈ (0,1), the _1-α constructed in Algorithm <ref> has the asymptotic coverage probability 𝐏(β^0 ∈_1-α) 1-α. Remark 1. Theorem <ref> implies that, with a consistent selection procedure, the proposed SSCI can asymptotically achieve the nominal coverage probability. The function h(n) has different forms when different selection method is adopted. We have h(n)=n^c with 0≤ c <1 for Lasso, and h(n)=γ n for Adaptive Lasso and SPSP. We further discuss the induced properties from Theorem <ref> when we incorporate Lasso, adaptive Lasso, and SPSP in our approach as shown in Corollaries <ref>, <ref> and <ref>. Let 𝒮̃^(λ_n) be the model selected by the Lasso with the tuning parameter λ_n and ^_1-α (λ_n) be constructed by Algorithm <ref> using the Lasso with the tuning parameter λ_n and the least square refitted estimate. Under the strong irrepresentable condition <cit.>, 𝐏(𝒮̃^(λ_n)=𝒮_0) ≥ 1-2e^-n^c if λ_n/n → 0 and λ_n/n^(1+c)/2→∞ with 0≤ c < 1, and B=o(e^n^c) , we have 𝐏(β^0 ∈^_1-α(λ_n)) 1-α. Let 𝒮̃^(λ_n) be the model selected by the adaptive Lasso with the tuning parameter λ_n and ^_1-α (λ_n) be constructed by Algorithm <ref> using the adaptive Lasso with the tuning parameter λ_n and the least square refitted estimate. Under the restricted eigenvalue condition <cit.>, 𝐏(𝒮̃^(λ_n)=𝒮_0)≥ 1-2e^-γ n if λ_n=4σ√(2γ+ 2log p/n ) and γ→ 0. Further if B=o(e^γ n), we have 𝐏(β^0 ∈^_1-α(λ_n)) 1-α. Let 𝒮̃^ be the model selected by SPSP and ^_1-α be constructed by Algorithm <ref> using the SPSP and the least square refitted estimate. 
Under the compatibility condition in <cit.>, and the weak identifiability condition in <cit.>, the SPSP can select the true model 𝒮_0 over λ∈ [4σ√(4γ + 2log p/n),+∞] with probability at least 1-2e^-γ n, that is 𝐏(𝒮̃^=𝒮_0)≥ (1-2e^-γ n). Further if B=o(e^γ n), we have 𝐏 (β^0 ∈^_1-α ) 1-α. §.§ Visualization and Comparison with Other Methods In this section, we develop an intuitive graphical tool, namely the SSCI plot, to visualize the estimation uncertainty. Specifically, we plot the confidence intervals of all covariates side by side but rearrange them in the following order: We place the significant covariates with all positive (or all negative) interval boundaries on the left (or right) end of the horizontal axis. The plausible and unimportant covariates are placed in the middle with grey and blue shades. The bootstrap estimates contained in SSCI are drawn in a light blue line while the true coefficient (if known) is in red. We present a simple example using the simulated data from Example 4 in Section <ref>. We construct SSCI using SPSP+AdaLasso, SPSP+Lasso, AdaLasso, and Lasso and visualize them in Figure <ref>. For the sake of simplicity, we only display the total number of covariates instead of the variables' names. In the figure, the estimation uncertainty can be reflected by the vertical widths of the intervals. The significant covariates are labeled in red to highlight their importance in the model. For the plausible covariates, their confidence intervals contain zero; hence we put these covariates on the “waiting list”. The confidence intervals of unimportant covariates all shrunk to zero, implying they should be excluded from the model. It is worth noting that the SSCIs by SPSP perform better than the others in this example. For instance, the plausible covariates of SSCI by SPSP are much fewer than the rest, indicating the stability of SPSP. This is because the bootstrap models of AdaLasso and Lasso fail to reach any consensus. In addition, the vertical widths of SSCIs by SPSP are, on average narrower than the other SSCIs, indicating a lower estimation uncertainty. We summarize their vertical widths in Table <ref>. Lastly, we graphically compare SSCI with the simultaneous confidence region (SCR) and the simultaneous confidence intervals based on debiased Lasso (SCI debiased Lasso), all of which capture the true coefficients at the 1-α confidence level. The SCR <cit.> is defined as SCR_1-α={β∈ℝ^p: ‖β-β̂‖≤ n^-1/2t̂_1-α}, where β̂ is the Lasso estimate, ‖·‖ is ℓ_2 norm, and t̂_1-α is the 1-α quantile of the ℓ_2 norms of the centered and scaled bootstrap estimates. Therefore, the shape of SCR is an ellipsoid centered at β̂ capturing 1-α proportion of the bootstrap estimates, as shown in Figure <ref>. The data is simulated with n=200, p=3, (β_1, β_2, β_3)=(3, 2, 0), σ_ε=3.5. The SCI debiased Lasso <cit.> is defined as SCI^_1-α= {β∈ℝ^p: √(n) | β - β̂^ | ≤ c^*_1-α1} where β̂^ is the debiased Lasso estimate, and c^*_1-α is 1-α bootstrap critical value in <cit.>, and 1=(1,...,1)^T. Therefore, the shape of SCI debiased Lasso is a hypercube centered at β̂^, as shown in Figure <ref>, using the same data. In contrast, our SSCI is a special type of hyperrectangle in ℝ^p with some of its edges shrunken to zero, representing the unimportant covariates. An example is shown in Figures <ref> and <ref> using the same data. The bootstrapped SPSP estimates are shown in dots. The SSCI is the black hyperrectangle with its third dimension, β_3, shrunken to zero. 
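For the empirical comparisons of these confidence sets, the membership checks are straightforward. A sketch follows (Python/NumPy; `beta_hat` denotes the point estimate around which the SCR is centered, and the function names are ours).

```python
import numpy as np

def scr_contains(beta, beta_hat, boot_betas, n, alpha=0.05):
    """Membership test for the simultaneous confidence region
    SCR = {beta : ||beta - beta_hat|| <= n^{-1/2} t_hat}, with t_hat the
    (1-alpha) quantile of the sqrt(n)-scaled centered bootstrap norms."""
    norms = np.sqrt(n) * np.linalg.norm(boot_betas - beta_hat, axis=1)
    t_hat = np.quantile(norms, 1.0 - alpha)
    return np.linalg.norm(beta - beta_hat) <= t_hat / np.sqrt(n)

def ssci_contains(beta, lower, upper):
    """Membership test for the SSCI hyperrectangle."""
    return bool(np.all((beta >= lower) & (beta <= upper)))
```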
Even though all three types of confidence sets claim to capture the true coefficients at the 1-α confidence level, they are quite different in terms of usage. SCR often offers the tightest confidence set than SCI debiased Lasso since the ellipsoid is able to capture more bootstrap estimates than the hypercube of the same volume. SCI debiased Lasso is easy to interpret since it consists of p individual intervals, whereas SCR cannot be expressed in the same fashion since the interval boundary for one coefficient depends on the rest of the coefficients. Unfortunately, neither case can offer us model information. On the other hand, SSCI is often one of the tightest due to the stability of SPSP, and it comes with easy interpretability. The shrunken edges of SSCI indicate the exclusion of the corresponding covariates in the final model. Such a message is not available in neither SCR nor SCI debiased Lasso. §.§ Model Insights Since the refitted parameter estimate β̃ is obtained after the model 𝒮̃ is selected, the overall estimation uncertainty of β̃, measured by SSCI, can be decomposed as (β̃)=𝔼((β̃|𝒮̃))+(𝔼(β̃|𝒮̃)). We refer to the first term on the right as the parameter uncertainty, representing the averaged parameter estimation uncertainty conditional on selected models. We refer to the second term as the model uncertainty, which represents the extra parameter estimation uncertainty owing to model selection. With the consistent model selector and least square estimator, the parameter uncertainty converges to 0 at the order 1/n. Since the probability of selecting the true model is typical 1-2 exp(-Cn), the model uncertainty converges to 0 much faster. Hence, asymptotically, the overall estimation uncertainty is mostly the parameter uncertainty. On the other hand, under the finite sample size, both the model uncertainty and parameter uncertainty are non-negligible. In this case, we can still utilize SSCI to provide insights about models, admitting that the asymptotic results are not directly applicable and the finite sample theoretical results are hard to derive. Based on the graphical tools introduced previously, we can visualize the parameter uncertainty as well as the model uncertainty. Using the simulated data under the setting in Section <ref>, we first generate the SSCI plot in Figure <ref>. There are, in total, three unique bootstrap models for this data set, so we plot the simultaneous confidence intervals based on each bootstrap model in the shade in the first three panels of Figure <ref> and overlay them in the last panel. We can see that the simultaneous confidence intervals based on each bootstrap model are narrower than SSCI. The difference represents the increased estimation uncertainty from conditional on one selected model to marginalizing over all selected models. Meanwhile, the estimation uncertainty is the most inflated for the coefficients of X_6 and X_7 because these covariates are selected differently in the bootstrap models. Under the finite sample size, SSCI naturally defines two nested models: 1) the lower bound model with the significant covariates, denoted as 𝒮=𝒮̂_1; 2) the upper bound model with the significant and plausible covariates, denoted as 𝒮 = 𝒮̂_1∪𝒮̂_2. It is straightforward to show that the two models trap the true model 𝒮_0 with at least the same confidence level. We call this pair of models model confidence bounds (MCB) induced by SSCI. 
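The decomposition described above can be examined empirically by grouping the bootstrap estimates according to the support of the selected model, as in the law of total variance. The sketch below (Python/NumPy, illustrative only) returns the total bootstrap variance of each coefficient together with its within-model and between-model parts; the lower and upper bound models of the MCB themselves follow from the interval-classification rule sketched earlier.

```python
import numpy as np

def bootstrap_variance_decomposition(boot_betas):
    """Empirical law-of-total-variance decomposition of each coefficient's
    bootstrap variance, grouping draws by the support of the selected model:
        total = E[Var(beta | model)] + Var(E[beta | model]).
    Returns (total, within_model, between_model), each a length-p vector."""
    boot_betas = np.asarray(boot_betas, dtype=float)
    B, p = boot_betas.shape
    groups = {}
    for row in boot_betas:
        groups.setdefault(tuple(np.flatnonzero(row)), []).append(row)
    grand_mean = boot_betas.mean(axis=0)
    total = boot_betas.var(axis=0)
    within = np.zeros(p)
    between = np.zeros(p)
    for rows in groups.values():
        rows = np.asarray(rows)
        w = rows.shape[0] / B
        within += w * rows.var(axis=0)
        between += w * (rows.mean(axis=0) - grand_mean) ** 2
    return total, within, between            # total = within + between
```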
Conceptually, our MCB extends the idea of the classical confidence interval for a population parameter to the case of model selection. The lower bound model is regarded as the most parsimonious model that cannot afford to lose one more covariate. In contrast, the upper bound model is viewed as the most complex model that cannot tolerate one extra covariate. We can define the width of MCB as w=|𝒮∖𝒮| = |𝒮̂_2|, i.e., the number of plausible covariates. Similar to the width of the classical confidence interval, the MCB width can be potentially used as a measure of model selection stability. Since MCB can be coupled with various selection methods, we can compare their MCB widths at the same confidence level. As an example, we construct the MCBs using SPSP+AdaLasso, SPSP+Lasso, AdaLasso, and Lasso at different confidence levels using the simulated data from Example 4 in Section <ref>. Figure <ref> shows how these MCBs behave as the confidence level changes. The horizontal axis represents the covariate, whereas the vertical axis represents the confidence level. At each confidence level, the lower bound model (or the significant covariates) is the red area. The plausible covariates are the grey areas, which are also the MCB width. The upper bound model consists of both red and grey areas. The unimportant covariates are the blue area. As we can see, the MCB width increases as the confidence level increases, which is consistent with the traditional confidence intervals. Among these MCBs, the MCBs by SPSP are able to maintain small widths throughout the confidence levels, whereas the MCBs by AdaLasso and Lasso all have large widths. This evidence implies that the SPSP-based method selects the model more stably and results in fewer unique bootstrap models than Lasso and AdaLasso. More discussion of the SPSP method can be found in Section <ref>. § SELECTION BY PARTITIONING SOLUTION PATHS Our SSCI can be equipped with different variable selection and estimation methods. A more accurate and stable method will help to construct narrower intervals as well as more zero-width intervals and provide an informative MCB. Throughout this article, we rely on SPSP <cit.> due to its stability and accuracy. The idea of SPSP is to partition the covariates into “relevant” and “irrelevant” sets by utilizing the entire solution paths. It mainly has three advantages. First, SPSP is more stable and accurate (low false positive and false negative rates) than other variable selection approaches. As a result, it will generate fewer unique models among bootstrap samples. Second, SPSP is computationally more efficient because it does not need to calculate the solutions multiple times for all tuning parameters when cross-validation is involved. When the bootstrap technique is applied, this advantage may dramatically decrease the computing time. Third, one can flexibly incorporate SPSP with the solution paths of Lasso, adaptive Lasso, and other penalized estimators. Here, we briefly introduce the SPSP approach. More in-depth discussion can be found in <cit.>. Given a tuning parameter λ_k and the corresponding coefficients estimation β̂_1, …, β̂_p, it orders the coefficients estimation as β̂_(1), …, β̂_(p). Based on the order statistics, it defines the gap between a relevant set (𝒮̂_k) and an irrelevant set (𝒮̂^c_k) as the adjacent distance between two order statistics, β̂_(p-ŝ_k) and β̂_(p-ŝ_k+1). This gap (adjacent distance) is denoted as D(𝒮̂_k, 𝒮̂^c_k)=D^(k)_p-ŝ_k+1= β̂^(k)_(p-ŝ_k+1) - β̂^(k)_(p-ŝ_k). 
With this notation, the largest adjacent distance in 𝒮̂_k can be defined as D_max(𝒮̂_k)=max{D^(k)_j: j>p-ŝ_k + 1}. Likewise, D_max(𝒮̂^c_k)=max{D^(k)_j: j<p-ŝ_k + 1} is the largest adjacent distance in 𝒮̂^c_k. Then the final step of the SPSP can be implemented by finding these two sets such that the gap is greater than the largest adjacent distance in irrelevant set (𝒮̂_k), and smaller than the largest adjacent distance in the relevant set (𝒮̂^c_k). The SPSP adaptively finds a large enough distance, D(𝒮̂_k, 𝒮̂^c_k), which satisfies D_max(𝒮̂_k)/D(𝒮̂_k, 𝒮̂^c_k)≤ R < D(𝒮̂_k, 𝒮̂^c_k)/D_max(𝒮̂^c_k), where the constant control value R can be estimated from the data. Finally, the SPSP method identifies a set of relevant variables 𝒮̃^SPSP as the union of all 𝒮̂_k for the tuning parameters λ_1 < λ_2 < ⋯ < λ_k, i.e 𝒮̃^SPSP=∪^K_k=1𝒮̂_k, as the estimate of the true model 𝒮_0. Afterwards, we refit the model to obtain the least square estimation β̂^LS_𝒮̃^SPSP for the selected covariates 𝒮̃^SPSP but keep zero coefficients for unselected covariates. The estimation of coefficients for SPSP are β̂^SPSP=(β̂^LS_𝒮̃^SPSP,0_𝒮̂^c). § SIMULATION STUDIES We investigate the performance of our SSCI in the high-dimensional settings with independent or correlated covariates and low-dimensional settings with weak signals. Under each setting, we construct SSCIs using four variable selection methods: SPSP based on the solution paths of adaptive Lasso (SPSP+AdaLasso) and Lasso (SPSP+Lasso), adaptive Lasso with 10-fold cross-validation (AdaLasso+CV), and Lasso with 10-fold cross-validation (Lasso+CV). We also construct SCI debiased Lasso. Lastly, we construct the simultaneous confidence intervals using the OLS estimates under the true model as a benchmark for estimation uncertainty (denoted as “Oracle”). We set the confidence level as 95%. To obtain these results, we develop package to construct and visualize the SSCI based on the and packages <cit.>. Below is a list of settings. Study 1: We generate 200 data sets from the linear model with 𝐗_j∼ N(0, 1), ε_i ∼ N(0, 1), for i = 1,…,n and j = 1,…, p, and set B=1000. Example 1: (Independent covariates) Let n = 200, p = 300, and β^0=(4, 3.5, 3, 2.5, 2, 0,...,0). Example 2: (Correlated covariates) Let n=50, p=100, and β^0=(3, 2, 1.5,0,...,0). The pairwise covariate correlation is cor(𝐗_j, 𝐗_j')=0.5^|j - j'|. Example 3: The setting is same as the Example 2 except the n = 200 and p = 300. Example 4: (Correlated covariates with coefficients of alternating signs) Let n = 200, p = 300, β^0=(0.9,-0.85,0.93,-1, 0.8, -0.85, 0.88, 0,...,0), cor(𝐗_j, 𝐗_j')=0.5^|j - j'|. We compare different methods in the following aspects: coverage probability of the simultaneous confidence intervals P^SCI_coverage, average interval width of true signals w̅_𝒮_0 and non-signals w̅_𝒮_0^c, MCB coverage probability P^MCB_coverage, average MCB width w̅, and the coverage rate of the individual confidence interval for the weak signal P^θ_coverage. Results are shown in Table <ref> with the standard errors reported in parentheses. Under high-dimensional settings, all SSCIs by SPSP maintain valid coverage probabilities. Their interval widths, on average, are narrower than SCI debiased Lasso in most of the cases. In particular, the interval widths of the non-signals are close to zero because of their sparsity. In terms of the inference of the true model, the SPSP-based MCBs all maintain valid coverage probabilities. Their widths are also smaller than the rest. 
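Returning to the SPSP partition rule described above, the sketch below gives one simplified reading of it in Python/NumPy: coefficient magnitudes are ordered, adjacent gaps are scanned, and a split is accepted when its separating gap D satisfies D_max(relevant)/D ≤ R < D/D_max(irrelevant); the relevant sets are then unioned over the solution path. Ordering by absolute value is our assumption, and the data-driven choice of the control value R used in the paper is not reproduced here.

```python
import numpy as np

def spsp_split(beta_hat, R=1.0):
    """Simplified SPSP partition for one point on the solution path.

    Sorts coefficient magnitudes, scans adjacent gaps, and accepts a split
    whose separating gap D satisfies
        D_max(relevant) / D <= R  <  D / D_max(irrelevant).
    Returns indices of the relevant set (empty array if no split qualifies).
    """
    beta_hat = np.asarray(beta_hat, dtype=float)
    p = beta_hat.size
    order = np.argsort(np.abs(beta_hat))          # ascending magnitudes
    mags = np.abs(beta_hat)[order]
    gaps = np.diff(mags)                          # gaps[j] = mags[j+1] - mags[j]
    for s in range(1, p):                         # s = candidate relevant-set size
        split = p - s - 1                         # index of the separating gap
        D = gaps[split]
        d_rel = gaps[split + 1:].max() if s > 1 else 0.0
        d_irr = gaps[:split].max() if split > 0 else 0.0
        if D > 0 and d_rel / D <= R and (d_irr == 0 or R < D / d_irr):
            return np.sort(order[split + 1:])     # the s largest coefficients
    return np.array([], dtype=int)

def spsp_select(path_betas, R=1.0):
    """Union of relevant sets over a solution path (rows = lambda values)."""
    selected = set()
    for b in path_betas:
        selected.update(spsp_split(b, R).tolist())
    return np.array(sorted(selected), dtype=int)
```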
Lasso with cross-validation often provides unsatisfactory coverage rates. This is mostly because of its poor selection performance as opposed to the SSCI procedure. For example, under the challenging setting of Example 4, Lasso frequently over-selects redundant variables hence inducing bias in selection, leading to poor coverage rates of its confidence intervals. Our approach can also work well when the weak signal covariates are involved. We follow <cit.> and include a study to show the advantages of our SSCIs for the weak signal situation. Study 2: We simulate 400 data sets from linear model with n = 100 and p = 20. Let 𝐗_j∼ N(0, 1) and ε_i ∼ N(0, 2^2) for i = 1,…,n and j = 1,…, p. Let β^0=(1,1,0.5,θ,0,...,0) with a weak signal of θ=0.3. The pairwise covariate correlation is cor(𝐗_j, 𝐗_j')=ρ^|j - j'|. Results are shown in Table <ref>. Example 5: (Independent covariates) ρ=0. Example 6: (Weakly correlated covariates) ρ=0.2. Example 7: (Moderately correlated covariates) ρ=0.5. Under the weak signal setting, the advantages of our SPSP-based SSCIs persist. Although the interval widths are inflated compared to Study 1, our SSCIs are still narrower than SCI debiased Lasso for both true and non-signals. Besides, the MCBs are still tight and can achieve nominal coverage probability. Lastly, the individual confidence interval for the weak signal all have valid coverage probabilities. § REAL DATA EXAMPLES In this section, we apply the proposed approach to investigate biology's critical genome-wide transcriptional regulation problem. Specifically, biologists are interested in identifying a few crucial transcription factors (TFs) that are associated with the gene expression levels during the yeast cell cycle process. The response variable of this data is the n=1132 gene expression levels of yeast in the study <cit.> and <cit.>. The covariates include 96 transcription factors (TFs) measured by binding probabilities using a mixture model based on the ChiP data <cit.> (standardized to have zero mean and unit variance) and their interaction effects. The dataset is publicly available in the R package <cit.>. Previous studies have focused on either the individual TF effects or the synergistic effects where a pair of TFs cooperate to regulate transcription in the cycle process <cit.>. For example, <cit.> and <cit.> identify 31 and 18 cooperative TF pairs, respectively. However, to the best of our knowledge, there is no simultaneous inference of both the individual TFs and cooperative TF pairs. We attempt to conduct simultaneous inference to investigate this issue. In particular, we pre-screen all the individual TFs and 4560=96*95/2 TF interactions using sure independence screening <cit.> and identify 1200 covariates correlated with the response variable. We then add the individual TF back even if the screening does not select them to avoid missing any vital TFs. We obtain 1263 covariates, including the time t, for our inference. We construct SSCI using the recommended SPSP + AdaLasso with B=5000. We present the results of 95% SSCI in Figure <ref>. The upper and lower bounds of the confidence intervals are given in Table 3 of the supplementary materials. Among the 1263 covariates, only two covariates, MBP1 and time, are identified as significant covariates, deemed important for regulating the transcription process. Another 37 covariates are identified as plausible covariates, whose relevance to the gene expression levels requires further studies. The rest of the covariates are identified as unimportant. 
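The covariate pre-processing just described — building all pairwise TF interactions and screening them before fitting — can be sketched as follows (Python/NumPy; screening by absolute marginal correlation is the standard sure independence screening recipe, the exact statistic used in the paper may differ, and the function names are ours).

```python
import numpy as np
from itertools import combinations

def add_pairwise_interactions(X):
    """Append all pairwise interaction columns X_j * X_k (j < k)."""
    pairs = list(combinations(range(X.shape[1]), 2))
    inter = np.column_stack([X[:, j] * X[:, k] for j, k in pairs])
    return np.hstack([X, inter]), pairs

def sis_screen(X, y, n_keep):
    """Sure independence screening: keep the n_keep columns of X with the
    largest absolute marginal correlation with y."""
    Xc = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    yc = (y - y.mean()) / (y.std() + 1e-12)
    corr = np.abs(Xc.T @ yc) / len(y)
    return np.sort(np.argsort(corr)[::-1][:n_keep])
```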
All covariates are displayed in Figure <ref> using different shaded areas. Our results decrease the size of the candidate pool of biologically relevant TFs and synergistic interactions. Comparing our results with the literature, all of our identified individual TFs, MBP1, FKH2, NDD1, and SWI6, are experimentally verified <cit.>. Besides, 12 of our identified cooperative TF pairs have been documented in the literature (MBP1-SWI6, GAT3-PDR1, GAT3-MSN4, MSN4-YAP5, MSN4-PDR1, SWI4-SWI6, HAP4-PDR1, GAL4-RGM1, HIR1-HIR2, MCM1-NDD1, FKH2-NDD1, FKH2-MCM1). The remaining 22 plausible TF pairs have either one or both TFs selected in the literature <cit.> except the PDR1-RAP1 pair. We present some representative covariates from our SSCI in comparison to the literature in Table <ref>. A detailed comparison of all plausible covariates is given in Table 4 of the supplementary material. For these plausible covariates, one advantage of SSCI is that it reports the possible sign of their effects if one of their boundaries is at zero, e.g., [0,1] or [-1,0]. We report the estimated signs of TFs as “plausibly +/-” in Table <ref> and Table 4 of the supplementary materials and further compare the estimated signs with the documented regulatory effects in the literature. Most identified TFs and TF pairs (MBP1, FKH2, NDD1, SWI6, MBP1-SWI6, MSN4-YAP5, MSN4-PDR1, SWI4-SWI6, and HAP4-PDR1) are consistent with the literature <cit.>. § CONCLUSION This article proposes a sparse version of the simultaneous confidence intervals for high-dimensional linear regression. The proposed confidence intervals reflect rich information about the parameter and the model. Our approach has been theoretically and empirically justified, with desirable asymptotic properties and satisfactory numerical performance. There are many potential directions for future research. For example, we observe the varying performance of inference based on different variable selection methods. An interesting research topic would be to examine how applying different selection methods to the same data set might yield different results. Meanwhile, the weak signals pose challenges to many selection methods. Although we have tested our method numerically, extending our theoretical results to the weak signal case is an important topic <cit.>. In addition, our approach may be easily extended to build simultaneous confidence intervals for a subset of covariates, which is of great value in many real problems <cit.>. SUPPLEMENTARY MATERIAL Supplementary file: The supplementary file (pdf) contains the algorithm of the residual bootstrap, details of two real data examples, and technical proofs. We demonstrate the SSCI on the low-dimensional Boston housing data. In addition, we include more details of the high-dimensional Yeast cell-cycle (G1) data analysis in the paper. R-package for sparsified simultaneous confidence intervals: We develop the R-package to construct and visualize the sparsified simultaneous confidence intervals described in the article. This package implements the SSCI method proposed in this paper and supports building the SSCI using all selection approaches adopted in simulation studies. (R package binary file) On behalf of all authors, the corresponding author states that there is no conflict of interest. § §.§.§ penbib@code@ apalike
http://arxiv.org/abs/2307.04966v1
20230711015827
Wasserstein Distributionally Robust Regret-Optimal Control under Partial Observability
[ "Joudi Hajar", "Taylan Kargin", "Babak Hassibi" ]
math.OC
[ "math.OC" ]
Wasserstein Distributionally Robust Regret-Optimal Control under Partial Observability The authors are affiliated with the Department of Electrical Engineering at Caltech. Emails: {jhajar,tkargin,hassibi}@caltech.edu. Joudi Hajar Taylan Kargin Babak Hassibi August 12, 2023 =========================================================================================================================================================================================================================== plain plain This paper presents a framework for Wasserstein distributionally robust (DR) regret-optimal (RO) control in the context of partially observable systems. DR-RO control considers the regret in LQR cost between a causal and non-causal controller and aims to minimize the worst-case regret over all disturbances whose probability distribution is within a certain Wasserstein-2 ball of a nominal distribution. Our work builds upon the full-information DR-RO problem that was introduced and solved in Yan et al., 2023 <cit.>, and extends it to handle partial observability and measurement-feedback (MF). We solve the finite horizon partially observable DR-RO and show that it reduces to a tractable semi-definite program whose size is proportional to the time horizon. Through simulations, the effectiveness and performance of the framework are demonstrated, showcasing its practical relevance to real-world control systems. The proposed approach enables robust control decisions, enhances system performance in uncertain and partially observable environments, and provides resilience against measurement noise and model discrepancies. regret-optimal control, Wasserstein distance, partial observability, distributionally robust control § INTRODUCTION Regret-optimal control <cit.>, is a new approach in control theory that focuses on minimizing the regret associated with control actions in uncertain systems. The regret measures the cumulative difference between the performance achieved by a causal control policy and the performance achieved by an optimal policy that could have been chosen in hindsight. In regret-optimal control, the worst-case regret over all ℓ_2-norm-bounded disturbance sequences is minimized. Distributionally robust control <cit.>, on the other hand, addresses uncertainty in system dynamics and disturbances by considering a set of plausible probability distributions rather than relying on a single distribution as in LQG control, or on a worst-case disturbance, such as in H_∞ or RO control. This approach seeks to find control policies that perform well across all possible distributions within the uncertainty set, thereby providing robustness against model uncertainties and ensuring system performance in various scenarios. The size of the uncertainty set allows one to control the amount of desired robustness so that, unlike H_∞ controllers, say, the controller is not overly conservative. The uncertainty set is most often taken to be the set of disturbances whose distributions are within a given Wasserstein-2 distance of the nominal disturbance distribution. The reason is that, for quadratic costs, the supremum of the expected cost over a Wasserstein ball reduces to a tractable semi-definite program (SDP). The current paper considers and extends the framework introduced in <cit.> that applied distributionally robust (DR) control to the regret-optimal (RO) setting. In the full-information finite-horizon setting, the authors of <cit.> reduce the DR-RO problem to a tractable SDP. 
In this paper, we extend the results of <cit.> to partially observable systems where, unlike the full-information setting, the controller does not have access to the system state. Instead, it only has access to partial information obtained through noisy measurements. This is often called the measurement feedback (MF) problem. Of course, the solution to the measurement feedback problem in LQG and H_∞ control is classical. The measurement-feedback setting for DR control has been studied in  <cit.>, <cit.>, and for RO control in  <cit.>. In the finite-horizon case, we reduce the DR-RO control problem with measurement feedback to an SDP similar to the full-information case studied in <cit.>. Furthermore, we validate the effectiveness and performance of our approach through simulations, showcasing its applicability in real-world control systems. The organization of the paper is as follows. In section <ref>, we review the LQG and regret optimal control formulation in the measurement-feedback setting. In section <ref>, we present the distributionally robust regret-optimal with measurement feedback (DR-RO-MF) problem formulation, in section <ref> we reformulate the problem as a tractable SDP, and in section <ref> we show numerical results for controlling the flight of a Boeing 747 <cit.>. § PRELIMINARIES §.§ Notations ℝ denotes the set of real numbers, ℕ is the set of natural numbers, · is the 2-norm, 𝔼_(·) is the expectation over (·), ℳ(·) is the set of probability distributions over (·) and Tr denotes the trace. §.§ A Linear Dynamical System We consider the following state-space model of a discrete-time, linear time-invariant (LTI) dynamical system: x_t+1 =Ax_t+Bu_t+w_t, y_t =Cx_t+v_t. Here, x_t∈ℝ^n represents the state of the system, u_t∈ℝ^m is the control input, w_t ∈ℝ^n is the process noise, while y_t ∈ℝ^p represents the noisy state measurements that the controller has access to, and v_t ∈ℝ^p is the measurement noise. The sequences {w_i} and {v_i} are considered to be randomly distributed according to an unknown joint probability measure P which lies in a specified compact ambiguity set, P. For simplicity, we take x_0 to be zero. In the rest of this paper, we adopt an operator form representation of the system dynamics (<ref>). To this end, assume a horizon of N∈ℕ, and let us define x [ [ x_0; x_1; ⋮; x_N-1 ]] ∈ℝ^Nn   ,    u [ [ u_0; u_1; ⋮; u_N-1 ]] ∈ℝ^Nm and similarly for y∈ℝ^Np, w∈ℝ^Nn, and v∈ℝ^Np. Using these definitions, we can represent the system dynamics (<ref>) equivalently in operator form as x =Fu+Gw, y =Ju+Lw+v, where F∈ℝ^Nn× Nm, G∈ℝ^Nn× Nn, J∈ℝ^Np× Nm, and L∈ℝ^Np× Nn are strictly causal time-invariant operators (i.e, strictly lower triangular block Toeplitz matrices) corresponding to the dynamics (<ref>). We consider the Linear-Quadratic Gaussian (LQG) cost given as J(u,w,v) x^TQx+u^TRu where Q, R≻0 are positive definite matrices of the appropriate dimensions. In order to simplify the notation, we redefine x and u as x← Q^1/2x, and u← R^1/2u, so that (<ref>) becomes J(u,w,v)=x^2+u^2. §.§ Controller Design We consider a linear controller that has only access to the measurements: u=Ky, K∈𝒦, where 𝒦⊆ℝ^Nm× Np is the space of causal (i.e., lower triangular) matrices. Then, the closed-loop state measurement becomes y=(I-JK)^-1(Lw+v). As in <cit.>, let E=K(I-JK)^-1, be the Youla parametrization, so that K=(I+EJ)^-1E. 
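The operator form used above is the standard block-Toeplitz lifting of the state-space model over the horizon N, with x_0 = 0. A minimal sketch is given below (Python/NumPy; the paper's experiments use MATLAB/CVX, so this is only an illustration); J and L follow from y = Cx + v.

```python
import numpy as np

def lift_operators(A, B, C, N):
    """Lift x_{t+1} = A x_t + B u_t + w_t, y_t = C x_t + v_t (x_0 = 0) over a
    horizon N into the operator form x = F u + G w, y = J u + L w + v, with
    F, G strictly causal (strictly lower block-triangular, block Toeplitz)."""
    n, m = B.shape
    F = np.zeros((N * n, N * m))
    G = np.zeros((N * n, N * n))
    Apow = [np.linalg.matrix_power(A, k) for k in range(N)]
    for t in range(N):                     # block row: x_t
        for s in range(t):                 # depends only on u_s, w_s with s < t
            F[t*n:(t+1)*n, s*m:(s+1)*m] = Apow[t-1-s] @ B
            G[t*n:(t+1)*n, s*n:(s+1)*n] = Apow[t-1-s]
    Cblk = np.kron(np.eye(N), C)           # y = (I_N ⊗ C) x + v
    return F, G, Cblk @ F, Cblk @ G        # F, G, J, L
```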
The closed-loop LQG cost (<ref>) can then be written as: J(K,w,v)= [ w^T v^T ] T_K^T T_K [ w; v ], where T_K is the transfer operator associated with K that maps the disturbance sequences [ w; v ] to the state and control sequences [ x; u ]: T_K[ FEL+G FE; EL E ]. §.§ Regret-Optimal Control with Measurement-Feedback Given a noncausal controller K_0 ∈ K, we define the regret as: R(K,w,v) J(K,w,v)- J(K_0,w,v), = [ w^T v^T ] (T_K^T T_K-T_K_0^T T_K_0)[ w; v; ], which measures the excess cost that a causal controller suffers by not knowing the future. In other terms, regret is the difference between the cost accumulated by a causal controller and the cost accumulated by a benchmark noncausal controller that knows the complete disturbance trajectory. The problem of minimizing regret in the measurement-feedback setting is referred to as (RO-MF) and is formulated as: inf_K∈𝒦sup_ w,vR(K,w,v)/w^2+ v^2, which is solved suboptimally by reducing it to a level-1 suboptimal Nehari problem <cit.>. § DISTRIBUTIONALLY ROBUST REGRET-OPTIMAL CONTROL In this section, we introduce the distributionally robust regret-optimal (DR-RO) control problem with measurement feedback, which we refer to as DR-RO-MF. In this setting, the objective is to find a controller K ∈𝒦 that minimizes the maximum expected regret among all joint probability distributions of the disturbances in an ambiguity set P. This can be formulated formally as inf_K∈𝒦sup_P∈𝒫𝔼_P [R(K,w,v)], where the disturbances [ w; v ] are distributed according to P∈ P. To solve this problem, we first need to characterize the ambiguity set 𝒫 and explicitly determine a benchmark noncausal controller K_0. As in <cit.>, we choose 𝒫 to be the set of probability distributions that are at a distance of at most r>0 to a nominal probability distribution, P_0∈ℳ(ℝ^N(n+p)). Here, the distance is chosen to be the type-2 Wasserstein distance defined as <cit.>: W_2^2(P_1,P_2):=inf_π∈Π(P_1,P_2) ∫_ℝ^n×ℝ^nz_1-z_2 ^2 π(dz_1,dz_2) , where the set Π(P_1,P_2) comprises all joint distributions that have marginal distributions P_1 and P_2. Then, 𝒫 can be written as: 𝒫 := {P ∈ℳ(ℝ^N(n+p)) | W_2(P_0, P)≤ r}. Unlike the full-information case, we know from Theorem 1 in <cit.> that in the measurement feedback case, there is no optimal noncausal controller that dominates every other controller for every disturbance. Therefore, we will choose K_0 as the optimal noncausal controller that minimizes the Frobenius norm of T_K. Theorem 3 in <cit.> shows that such a controller can be found as: K_0=(I+E_0J)^-1 E_0, where the associated operator, T_K_0 is: T_K_0=[ FE_0L+G FE_0; E_0L E_0 ], with E_0 -T^-1F^TGL^TU^-1, T I+F^TF , U I+LL^T . § TRACTABLE FORMULATION In this section, we introduce a tractable reformulation of the DR-RO-MF control problem (<ref>). §.§ DR-RO-MF Control Problem Defining 𝒞_K T_K^T T_K-T_K_0^T T_K_0, we can rewrite the DR-RO-MF control problem (<ref>) as inf_K∈𝒦sup_P∈𝒫𝔼_P [ [ w^T v^T ] C_K[ w; v; ]]. The following theorem gives the dual problem of inner maximization and characterizes the worst-case distribution. [adapted from Theorems 2 and 3 in <cit.>]. Suppose P_0 is absolutely continuous with respect to the Lebesgue measure on ℝ^N and [ w_0; v_0 ]∼ P_0. The optimization problem: sup_P∈𝒫𝔼_P[ [ w^T v^T ] C_K[ w; v; ]] where [ w; v ]∼ P and 𝒞_K∈𝕊^N(n+p), with λ_max(𝒞_K)≠ 0, has a finite solution and is equivalent to the convex optimization problem: inf_γ≥ 0, γ I ≻𝒞_Kγ (r^2-Tr(M_0)) + γ^2 Tr(M_0(γ I-𝒞_K)^-1), where M_0:=𝔼_P_0[[ w; v ][ w^T v^T ]]. 
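Given the lifted operators, the Frobenius-optimal noncausal benchmark is a direct matrix computation: T = I + F^T F, U = I + L L^T, E_0 = -T^{-1} F^T G L^T U^{-1}, K_0 = (I + E_0 J)^{-1} E_0, and T_{K_0} as displayed above. A sketch follows (Python/NumPy; the function name is ours).

```python
import numpy as np

def noncausal_benchmark(F, G, J, L):
    """Frobenius-optimal noncausal benchmark from the displayed formulas:
    T = I + F'F, U = I + LL', E0 = -T^{-1} F' G L' U^{-1},
    K0 = (I + E0 J)^{-1} E0, and the transfer operator T_{K0}."""
    T = np.eye(F.shape[1]) + F.T @ F
    U = np.eye(L.shape[0]) + L @ L.T
    E0 = -np.linalg.solve(T, F.T @ G @ L.T) @ np.linalg.inv(U)
    K0 = np.linalg.solve(np.eye(E0.shape[0]) + E0 @ J, E0)
    TK0 = np.block([[F @ E0 @ L + G, F @ E0],
                    [E0 @ L,         E0    ]])
    return E0, K0, TK0
```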
Furthermore, the disturbance that achieves the worst-case regret is [ w^∗; v^∗ ]∼ P^∗, where [ w^∗; v^∗ ] = γ^∗ (γ^∗ I - 𝒞_K)^-1[ w_0; v_0 ], and γ^∗ is the optimal solution of (<ref>), which also satisfies the algebraic equation: Tr( (γ(γ I - 𝒞_K)^-1-I)^2M_0)=r^2 The proof follows from Theorems 2 and 3 in <cit.> and is omitted for brevity here. We highlight two remarks pertaining to the presented theorem. Remark 1: Notice that the supremum of the quadratic cost depends on P_0 only though its covariance matrix M_0. Note further that as r→∞, the optimal γ reaches its smallest possible value (since r^2 multiplies γ in (<ref>)). The smallest possible value that γ can take is simply the operator norm of C_K, which means that the DR-RO-MF controller approaches the regret-optimal controller as r→∞. Remark 2: Notice that the worst-case disturbance takes on a Gaussian distribution when the nominal disturbance is Gaussian. This is not immediately evident as the ambiguity set 𝒫 contains non-Gaussian distributions. Note further that the worst-case disturbance is correlated even if the nominal distribution has white noise. Assuming the covariance of the nominal distribution to be M_0=𝔼_P_0[[ w; v ][ w^T v^T ]]=I. so that Tr(M_0)=N(n+p), the optimization problem (<ref>) can be cast equivalently using Theorem <ref> as inf_K∈𝒦inf_γ≥ 0γ (r^2-N(n+p)) + γ^2 Tr((γ I - 𝒞_K)^-1) s.t. γ I ≻𝒞_K 𝒞_K=T_K^T T_K -T_K_0^T T_K_0 As in <cit.>, define the unitary matrices Ψ and Θ: Θ=[ S^-1/2 0; 0 T^-T/2 ][ I -F; F^T I ] Ψ=[ I L^T; -L I ][ V^-1/2 -0; 0 U^-T/2 ] where T and U are as in (<ref>) and (<ref>), and S=I+FF^T V=I+L^TL. and S^1/2, T^1/2, U^1/2, and V^1/2 are (block) lower triangular matrices, such that S=S^1/2S^T/2, T=T^T/2T^1/2, U=U^1/2U^T/2, V=V^T/2V^1/2. Then, the optimization problem (<ref>) is equivalent to: inf_K∈𝒦, γ≥ 0, γ I ≻𝒞_Kγ (r^2-N(n+p)) + γ^2 Tr((γ I - 𝒞_K )^-1) s.t. 𝒞_K=(Θ T_K Ψ)^T Θ T_K Ψ-(Θ T_K_0Ψ)^T Θ T_K_0Ψ which holds true since trace is invariant under unitary Θ and Ψ. By introducing an auxiliary variable X≽γ^2 (γ I - 𝒞_K)^-1 and leveraging the Schur complement theorem as in <cit.>, the problem (<ref>) can be recast as inf_K∈𝒦, γ≥ 0, X ≽ 0γ (r^2-N(n+p)) + Tr(X) s.t. [ X γ I; γ I γ I - 𝒞_K ]≽ 0 γ I - 𝒞_K ≻ 0 𝒞_K=(Θ T_K Ψ)^T Θ T_K Ψ-(Θ T_K_0Ψ)^T Θ T_K_0Ψ In the following lemma, we establish some of the important identities that are utilized to convert problem (<ref>) to a tractable convex program. [adapted from <cit.>]. The following statements hold: * . γ I - 𝒞_K =[ γ I -PZ; -Z^T P^T γ I -Z^TZ ] where Z =T^1/2EU^1/2-W W =-T^-T/2F^TGL^TU^-T/2 P =V^-T/2G^TFT^-1/2 and E, T, U and V are as defined in <ref>, <ref>,  <ref> and <ref> respectively. * . γ I - 𝒞_K ≻ 0 ⇔ Y - W_-,γ_2≤ 1 where γ^-1 I+ γ^-2 P^TP= M_γ^T M_γ M_γ = (γ^-1 I+ γ^-2 P^TP)^1/2 W_γ =M_γW Y =M_γ T^1/2 EU^1/2 - W_+,γ and W_+,γ and W_-,γ are the causal and strictly anticausal parts of W_γ. Here, M_γ is lower triangular, and positive-definite. * Y is causal iff E is causal, where E can be found as follows: E=T^-1/2M_γ^-1(Y+W_+,γ)U^-1/2 * The condition in (<ref>) is recognized as a level-1 suboptimal Nehari problem that approximates a strictly anticausal matrix W_-,γ by a causal matrix Y. The proof follows from Theorem 4 in <cit.> and is omitted for brevity here. Using Lemma <ref>, problem (<ref>) can be reformulated as a tractable optimization program: inf_Z,Y∈𝒦, γ≥ 0, X ≽ 0γ (r^2-N(n+p)) + Tr(X) s.t. 
[ X_11 X_12 γ I 0; X_12^T X_22 0 γ I; γ I 0 γ I -PZ; 0 γ I -Z^T P^T γ I -Z^TZ ]≽ 0 Y - W_-,γ_2≤ 1 =inf_Z,Y∈𝒦, γ≥ 0, X ≽ 0γ (r^2-N(n+p)) + Tr(X) s.t. [ X_11 X_12 γ I 0 0; X_12^T X_22 0 γ I 0; γ I 0 γ I -PZ 0; 0 γ I -Z^T P^T γ I Z^T; 0 0 0 Z I ]≽ 0 Y - W_-,γ_2≤ 1 where the last step follows from the Schur complement. Using (<ref>), (<ref>), and H_γ=M_γ^-1W_+,γ-W we establish our main theorem. The distributionally robust regret-optimal control problem in the measurement feedback setting (<ref>) reads: inf_Y∈𝒦, γ≥ 0, X ≽ 0γ (r^2-N(n+p)) + Tr(X) s.t. [ X_11 X_12 γ I 0 0; X_12^T X_22 0 γ I 0; γ I 0 γ I -P(*) 0; 0 γ I -(*)^T P^T γ I (*)^T; 0 0 0 (*) I ]≽ 0 (*)=M_γ^-1Y+H_γ [ I (Y - W_-,γ)^T; Y - W_-,γ I ]≻ 0 The optimal controller K^∗ is then obtained using (<ref>) and (<ref>). §.§ Sub-Optimal Problem For a given value of γ, problem (<ref>) can be simplified into a tractable SDP. In practical implementations, we can solve problem (<ref>) by optimizing the objective function with respect to the variables Y and X while fixing γ, thus transforming the problem into an SDP, which can be solved using standard convex optimization packages. We then iteratively refine the value of γ until it converges to the optimal solution γ^*. This iterative process ensures that we obtain the best possible value for γ that minimizes the objective function in problem (<ref>). §.§ LQG and RO-MF Control Problems as Special Cases Interestingly, LQG and RO control in the measurement feedback setting can be recovered from the DR-RO-MF control by varying the radius r which represents the extent of uncertainty regarding the accuracy of the nominal distribution in the ambiguity set. When r→ 0, the ambiguity set transforms into a singular set comprising solely the nominal distribution. Consequently, the problem simplifies into a stochastic optimal control problem under partial observability: inf_K∈𝒦𝔼_P_0 [J(K,w,v)] As r→∞, the ambiguity set transforms into the set of any disturbance generated adversarially and the optimal γ reaches its smallest possible value which is the operator norm of C_K. This means that the problem reduces to the RO-MF control problem which we discussed in section <ref>. § SIMULATIONS §.§ Flight Control We focus on the problem of controlling the longitudinal flight of a Boeing 747 which pertains to the linearized dynamics of the aircraft, as presented in <cit.>. The linear dynamical system provided describes the aircraft's dynamics during level flight at an altitude of 7.57 miles and a speed of 593 miles per hour, with a discretization interval of 0.1 second. The state variables of the system encompass the aircraft's velocity along the body axis, velocity perpendicular to the body axis, angle between the body axis and the horizontal plane, and angular velocity. The inputs to the system are the elevator angle and thrust. The process noise accounts for variations caused by external wind conditions. The discrete-time state space model is: A= [ 0.9801 0.0003 -0.0980 0.0038; -0.3868 0.9071 0.0471 -0.0008; 0.1591 -0.0015 0.9691 0.0003; -0.0198 0.0958 0.0021 1.000 ] B= [ -0.0001 0.0058; 0.0296 0.0153; 0.0012 -0.0908; 0.0015 0.0008 ], C=[ 1 0 0 0; 0 0 0 1 ]. We conduct all experiments using MATLAB, on a PC with an Intel Core i7-1065G7 processor and 16 GB of RAM. The optimization problems are solved using the CVX package <cit.>. We limit the horizon to N=10. 
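For reference, the Boeing 747 model printed above and the scalar dual objective from the theorem in the previous section can be written down directly. The sketch below (Python/NumPy) only transcribes the matrices and evaluates γ(r² − tr M₀) + γ² tr(M₀(γI − 𝒞_K)^{-1}) for a given γ; the SDP over (Y, X) for fixed γ, which the paper solves with CVX in MATLAB, is not reproduced here.

```python
import numpy as np

# Longitudinal Boeing 747 model transcribed from the text (0.1 s discretization)
A = np.array([[ 0.9801,  0.0003, -0.0980,  0.0038],
              [-0.3868,  0.9071,  0.0471, -0.0008],
              [ 0.1591, -0.0015,  0.9691,  0.0003],
              [-0.0198,  0.0958,  0.0021,  1.0000]])
B = np.array([[-0.0001,  0.0058],
              [ 0.0296,  0.0153],
              [ 0.0012, -0.0908],
              [ 0.0015,  0.0008]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
N = 10                                    # planning horizon used in the experiments

def dr_dual_objective(gamma, C_K, M0, r):
    """Scalar dual objective from the inner maximization:
    gamma (r^2 - tr M0) + gamma^2 tr(M0 (gamma I - C_K)^{-1}),
    valid when gamma I - C_K is positive definite."""
    Rinv = np.linalg.inv(gamma * np.eye(C_K.shape[0]) - C_K)
    return gamma * (r**2 - np.trace(M0)) + gamma**2 * np.trace(M0 @ Rinv)
```

These matrices can be passed to a block-Toeplitz lifting such as the one sketched in the preliminaries to form F, G, J, L, after which the fixed-γ SDP is handed to an off-the-shelf conic solver and γ is refined iteratively as described above.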
We take the nominal distribution P_0 to be Gaussian with mean μ_0=0 and covariance Σ_0=I, and we investigate various values for the radius r, specifically: r∈{0, 0.2, 0.4, 0.6, 0.8, 1, 1.5, 2, 4, 8, 16, 32, 126}. For each value of r, we solve the sub-optimal problem described in section <ref>, iterating over γ until convergence to γ^*. To assess the performance of the controller, we compute the worst-case disturbance, which lies at a Wasserstein distance r from P_0, as discussed in theorem <ref>. Finally, we compare the regret cost of the DR-RO-MF controller with that of the LQG, H_∞ <cit.>, and RO-MF <cit.> controllers while considering the worst-case disturbance corresponding to the DR-RO-MF controller. The results are shown in Figures <ref> and <ref>. The DR-RO-MF controller achieves the minimum cost under worst-case disturbance conditions for any given value of r. When r is sufficiently small (less than 0.2), the cost of the DR-RO-MF controller closely approximates that of the LQG controller (figure <ref>). Conversely, for sufficiently large values of r (greater than 8), the cost of the DR-RO-MF controller closely matches that of the RO-MF controller (figure <ref>). These observations align with theoretical findings as elaborated in section <ref>. Furthermore, it is worth noting that for large values of r (figure <ref>), the LQG controller yields the poorest results. Conversely, for small values of r (figure <ref>), the LQG controller performs on par with the DR-RO-MF controller, emerging as the best choice, as mentioned earlier. This discrepancy is expected since LQG control accounts only for disturbances drawn from the nominal distribution, assuming uncorrelated noise. On the other hand, RO-MF exhibits inferior performance when r is small (figure <ref>), but gradually becomes the top-performing controller alongside DR-RO-MF as r increases. This behavior arises from the fact that RO-MF is specifically designed for sufficiently large r. Lastly, note that the H_∞ cost lies between the costs of the other controllers, interpolating their respective costs. §.§ Performance Under Adversarially Chosen Distribution For any given causal controller K_c, an adversary can choose the worst-case distribution of disturbances for a fixed r as max_P∈𝒫𝔼_ P R(K_c,w,v) P_c, where R is the regret as in (<ref>). Denoting by K_DR-RO-MF the optimal DR-RO-MF controller and by P_DR-RO-MF the worst-case (adversarial) distribution corresponding to K_DR-RO-MF, we have that 𝔼_ P_c R(K_c,w,v) = max_P∈𝒫𝔼_P R(K_c,w,v), ≥min_K∈𝒦max_P∈𝒫𝔼_P R(K,w,v), = 𝔼_ P_DR-RO-MF R(K_DR-RO-MF,w,v), ≥𝔼_ P_c R(K_DR-RO-MF,w,v), where the first equality follows from (<ref>) and the last inequality is due to the fact that P_DR-RO-MF is the worst-case distribution for K_DR-RO-MF. In other words, DR-RO-MF controller is robust to adversarial changes in distribution as it yields smaller expected regret compared to any other causal controller K_c when the disturbances are sampled from the worst-case distribution P_c corresponding to K_c. The simulation results presented in Subsection <ref> show that DR-RO-MF outperforms RO-MF, H_∞, and LQG (designed assuming disturbances are sampled from P_0) controllers under the worst-case distribution of the DR-RO-MF controller P_DR-RO-MF, i.e 𝔼_ P_DR-RO-MF R(K_c,w,v) ≥𝔼_ P_DR-RO-MF R(K_DR-RO-MF,w,v). 
This directly implies that the theoretically expected inequality 𝔼_ P_c R(K_c,w,v) ≥𝔼_ P_c R(K_DR-RO-MF,w,v) is validated and positively exceeded following the inequalities (<ref>) and 𝔼_ P_c R(K_c,w,v) ≥𝔼_ P_DR-RO-MF R(K_c,w,v). To further support our claims, we assess the performance of LQG and RO-MF controllers by measuring the relative reduction in expected regret when DR-RO-MF controller is utilized under the worst-case distributions corresponding to LQG and RO-MF controllers, respectively: 𝔼_P_c R(K_c,w,v) - 𝔼_ P_c R(K_DR-RO-MF,w,v)/𝔼_P_c R(K_c,w,v)× 100, where K_c is either LQG or RO-MF controller and P_c is the corresponding worst-case distribution. The results are shown in Table <ref> for r ∈{0.2,1,2,4,16,32}. §.§ Limitations In our scenario with a relatively short planning horizon of N=10, the cost reduction achieved by employing DR-RO-MF control, in comparison to traditional controllers such as LQG and H_∞, is moderate. However, it is anticipated that this reduction would become more pronounced with the utilization of a longer planning horizon. Unfortunately, in our experimental setup, we were restricted to using N=10 due to computational limitations. Solving semi-definite programs involving large matrices is computationally inefficient, necessitating this constraint. In practice, this limitation can be overcome by implementing the controller in a receding horizon fashion, where the controller is updated every x time steps. § CONCLUSION In conclusion, this paper extended the distributionally robust approach to regret-optimal control by incorporating the Wasserstein-2 distance <cit.> to handle cases of limited observability. The proposed DR-RO-MF controller demonstrated superior performance compared to classical controllers such as LQG and H_∞, as well as the RO-MF controller, in simulations of flight control scenarios. The controller exhibits a unique interpolation behavior between LQG and RO-MF, determined by the radius r that quantifies the uncertainty in the accuracy of the nominal distribution. As the time horizon increases, solving the tractable SDP to which the solution reduces, becomes more challenging, highlighting the practical need for a model predictive control approach. Overall, the extended distributionally robust approach presented in this paper holds promise for robust and effective control in systems with limited observability. ./bibliography/IEEEtran
http://arxiv.org/abs/2307.04293v1
20230710010226
Inverse of the Gaussian multiplicative chaos: an integration by parts formula
[ "Tomas Kojar" ]
math.PR
[ "math.PR" ]
[for feedback please contact [email protected]] In this article, we study the analogue of the integration by parts formula from <cit.> in the context of GMC and its inverse. PART: Introduction § INTRODUCTION This article is an offshoot application that came up in <cit.> while doing the preliminary work for extending the work in <cit.>. In particular, in their work they start with the Gaussian random field H on the circle with covariance H(z)H(z')=-lnz-z', where z, z'∈ℂ have modulus 1. The exponential γ H gives rise to a random measure τ on the unit circle , given by τ(I):=μ_H(I):=∫_Ie^γ H_(x)-γ^2/2H_(x)^2, for Borel subsets I⊂=ℝ/ ℤ=[0,1) and H_ is a suitable regularization. This measure is within the family of Gaussian multiplicative chaos measures (GMC) (for expositions see the lectures <cit.>).
So finally, they consider the random homeomorphism h:[0,1)→ [0,1) defined as the normalized measure h(x):=τ[0,x]/τ[0,1], x∈ [0,1), and prove that it gives rise to a Beltrami solution and conformal welding map. The goal is to extend this result to its inverse h^-1 and in turn to the composition h_1^-1∘ h_2 where h_1,h_2 are two independent copies. The motivation for that is of obtaining a parallel point of view of the beautiful work by Sheffield <cit.> of gluing two quantum disks to obtain an SLE loop. We let Q_τ(x):[0,τ([0,1])]→ [0,1] denote the inverse of the measure τ:[0,1]→ [0,τ([0,1])] i.e. Q_τ(τ[0,x])=xτ[0,Q_τ(y)]=y, for x∈ [0,1] and y∈ [0,τ([0,1])]. The existence of the inverse Q follows from the strict monotonicity of the Liouville measure η, which in turn follows from being non-atomic <cit.>. We use the notation Q because the measure τ can be thought of as the "CDF function" for the "density" γ H and thus its inverse τ^-1=Q is the quantile (also using the notation τ^-1 would make the equations less legible later when we start including powers and truncations). We will also view this inverse as a hitting time for the measure τ Q_τ(x)=Q_τ(0,x)=T_x:=inft≥ 0: τ[0,t]≥ x. The inverse homeomorphism map h^-1:[0,1]→ [0,1] is defined as h^-1(x):=Q_τ(xτ([0,1])) x∈ [0,1] Since the inverse of GMC didn't seem to appear in other problems, it was studied very little and so we had to find and build many of its properties. In the article <cit.>, we go over various basic properties of the inverse Q. Our guide for much for this work was trying to transfer the known properties of the GMC measure to its inverse, the Markovian structure for the hitting times of Brownian motion s (such as the Wald's equation and the independent of the increments of hitting times) and then trying to get whatever property was required for the framework set up by <cit.> to go through successfully. This was a situation where a good problem became the roadmap for finding many interesting properties for the inverse of GMC and thus GMC itself. When studying the expected value Q(a), we had trouble getting an exact formula. So in the spirit of <cit.> where they used Malliavin calculus to study the hitting times of processes, we tested using Malliavin calculus to gain better understanding of Q(a). Our guide for applying Malliavin calculus is also the article <cit.> where they applied Malliavin calculus to imaginary GMC. §.§ Acknowledgements We thank I.Binder, Eero Saksman and Antti Kupiainen. We had numerous useful discussions over many years. § MAIN RESULT In <ref>, we study the shifted field X_ζ=U_^r(τ_a+ζ). We will obtain an integration by parts formula for that field using the techniques from <cit.>. Then we will integrate over ζ to obtain relations for the shifted-GMC and the inverse in <ref>. For fixed ψ∈ C_c() where we normalize ∫_ψ(a)=1 and a,L≥ 0, we have the relation ∫_0^∞ψ(a)ητ_a,τ_a+L = L+λ ∫_0^r∧L∫_ζ^∞ψ(η(θ-ζ))∫_ (θ-r)∨0^θ1/θ-t-1/r(t) (θ), and ∫_0^∞ψ(a)τ_a = ∫_0^∞ψ(a) a+λ ∫_0^r∫_0^∞ψ(η(θ))∫_ (θ+ζ-r)∨0^θ+ζ1/θ+ζ-t-1/r(t) _ζ(θ), where _ζ(θ):=e^U(θ+ζ). PART: Integration by parts formula § SETUP FOR MALLIAVIN CALCULUS FOR THE INVERSE In this part we will use the setup from from <cit.> in order to use the integration by parts formula. In particular, for the Gaussian process X_t:=U_ϵ^δ(t) with covariance R(t,s):= {ln(r /ε )-1/-1/rt-s , t-s≤ε ln(r/t-s) +t-s/r-1 , δ>t-s≥. we will use the Malliavin calculus setup for Gaussian processes as developed in <cit.>. 
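As a purely illustrative aside (ours, not taken from <cit.>), the following minimal numerical sketch approximates the objects defined above — the measure τ, the normalized homeomorphism h, and the inverse hitting time Q — using a truncated Fourier-series regularization of the log-correlated field on the circle; the choice of regularization and all parameter values are our own assumptions made only for illustration.

# Illustrative sketch (ours): approximate GMC measure tau on the circle and its
# inverse Q, using a truncated Fourier-series regularization of the field.
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.5                        # GMC parameter, gamma^2 < 2
N, K = 10_000, 500                 # grid points on [0,1); Fourier truncation level

x = np.arange(N) / N               # grid on the circle [0,1)
k = np.arange(1, K + 1)
A, B = rng.standard_normal(K), rng.standard_normal(K)
# H_K(x) = sum_k (A_k cos(2 pi k x) + B_k sin(2 pi k x)) / sqrt(k) has covariance
# sum_k cos(2 pi k (x-y)) / k, which approaches -log|e^{2 pi i x}-e^{2 pi i y}| as K grows.
H = (np.cos(2 * np.pi * np.outer(x, k)) @ (A / np.sqrt(k))
     + np.sin(2 * np.pi * np.outer(x, k)) @ (B / np.sqrt(k)))
var_H = np.sum(1.0 / k)            # pointwise variance E[H_K(x)^2]

dtau = np.exp(gamma * H - 0.5 * gamma**2 * var_H) / N   # Wick-normalized tau increments
tau = np.cumsum(dtau)              # tau[0, x_i]
h = tau / tau[-1]                  # normalized homeomorphism h(x) = tau[0,x]/tau[0,1]

def Q(y):
    """Inverse of tau: the hitting time inf{t : tau[0,t] >= y}."""
    return x[min(np.searchsorted(tau, y), N - 1)]

print("tau[0,1] =", tau[-1], "  h(0.5) =", h[N // 2], "  Q(tau[0,1]/2) =", Q(tau[-1] / 2))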
Then once we obtain the various integration by parts formulas, we will then take limit in ϵ→ 0 using the convergence results for GMC (eg.<cit.>). For shorthand we will write U̅(t)=:γ U_ϵ(t):=γ U_ϵ(t)-γ^2/2ln1/ϵ. Let be the Hilbert space defined as the closure of the space of step functions on [0,∞) with respect to the scalar product ⟨1_[0,s] ,1_[0,t]|_⟩:=R(t,s). The mapping 1_[0,t]↦ X_t can be extended to an isometry between and the Gaussian space H_1(X) associated with X. We will denote this isometry by ϕ↦ X(ϕ). Let be the set of smooth and cylindrical random variables of the form F=f(X(ϕ_1),...,X(ϕ_n)) for some n≥ 1 and f∈ C^∞_b(ℝ^n) (smooth with bounded partial derivatives) and ϕ_i∈. The derivative operator D of a smooth and cylindrical random variable F∈ is defined as the -valued random variable DF=∑_i=1^n∂ f/∂ x_i (X(ϕ_1),...,X(ϕ_n)) ϕ_i. The derivative operator D is then a closable operator from L^2(Ω) into L^2(Ω;). The Sobolev space ^1,2 is the closure of with respect to the norm F_1,2^2=E(F^2)+E(DF_^2) The divergence operator δ is the adjoint of the derivative operator. We say that a random variable u∈ L^2(Ω;) belongs to the domain of the divergence operator, denoted by Dom (δ), if E⟨DF,u|_⟩≤ c_uF_L^2(Ω) for any F∈. In this case δ(u) is defined by the duality relationship EFδ (u)= E⟨DF,u|_⟩, for any F∈^1,2. §.§ Regularity of the covariance The following are some of the hypotheses used in the development of Malliavin calculus for Gaussian processes <cit.>. The difference is U_ε^δ(t)-U_ε^δ(s)^2 =2t-s/(1-/δ), which is strictly positive for t≠ s. The covariance R(τ,t):= {ln(r /ε )-1/-1/rτ-t , τ-t≤ε ln(r/τ-t) +τ-t/r-1 , r>τ-t≥. is in fact an absolutely continuous function as a map t↦ R(τ,t) for each τ: when τ-t≤ε, we have the absolutely continuous function g(t)=τ-t, and when τ-t> ε, we use that ln1/x is a differentiable function for x>>0. We compute the partial derivative to be R(τ,t)t= { -1/-1/rt-τ/t-τ , τ-t≤ε -1/t-τt-τ/t-τ +1/rt-τ/t-τ ,r>τ-t≥. . Therefore, for t>τ the derivative is negative R(τ,t)t<0 and for t<τ it is positive R(τ,t)t>0. So it is not continuous on the diagonal, which was one of the constraints in <cit.>. However, in the work <cit.>, they manage to weaken to the following hypotheses that are satisfied in this setting in <ref> For all T>0 the supremum of the integral of the partial derivative is finite for any α≥ 1 sup_s∈ [0,T]∫_0^TR(s,t)t^α<∞ and in fact for any continuous function f we have that s↦ F(s):=∫_0^T f(t)R(s,t)t is continuous on [0,∞). Finally, because of the stationarity the process U_(t) does not necessarily diverge to +∞ as t→ +∞. So that means that if we apply the results from <cit.>, we have to maintain the upper truncation τ_a∧ T. §.§ Regularity of U_ϵ(τ_a) and the inverse In this section we discuss the Malliavin differentiability situation for U_ϵ(Q_(a)) and for the inverse Q(x), in the limit ϵ=0. For the stopped process there is generally a lack of Malliavin differentiability. For example, for Brownian motion consider any stopping time T eg. the hitting time T=T_a of the integrated Geometric Brownian motion of level a>0 ∫_0^T_ae^B_s-1/2s=a. Then the stopped Brownian motion W_T is not Malliavin differentiable (<cit.>). If it was differentiable, we would have that W_T=∫_0^∞1_s≤ TdW_s∈𝔻^1,2 and 1_s≤ T∈𝔻^1,2. However, by <cit.> we would get that for any s≥ 0 either P[s≤ T]=0 or 1, which is a contradiction. On the other hand, for the inverse for ϵ>0, there are some results. 
The Malliavin derivative for increasing integral processes has been studied in <cit.>. <cit.> Let A_t_t∈ [0,1] be a continuous process such that: * Strictly positive A_t>0 for all t∈ [0,1]. * There exists a version of A such that for all h∈ H, the map (λ, t)↦ A_t(ω+λ h) is continuous. * Finite negative moments sup_t∈ [0,1]A_t^-1∈ L^p for p≥ 2. * Finite Malliavin derivative moments: A∈ L^p([0,1];^1,p) for p≥ 2. For fixed constant c>0 consider the hitting time of the integrated process T_c:=inft>0: ∫_0^tA_s≥ c. Then we have T_c∈^1,p for p≥ 2 with Malliavin derivative DT_c=-1/A_T_c∫_0^TDA_rT_c<1. In our case we have A_t:=:γ U_ϵ(t): satisfies all the above assumptions. However, the fraction -1/A_T_c=-γ U_ϵ(T_c)+γ^2/2ln1/ is likely diverging because for c≈ 0 we have T_c≈ 0 yet the expectation at zero diverges -γ U_ϵ(0)+γ^2/2ln1/=γ^2ln1/=^-γ^2→ +∞. So likely the above formula will not make sense in the limit → 0. This lack of differentiability also appears in the works <cit.>, nevertheless through mollification they manage to extract some interesting formulas that we will try to mimic for the setting of GMC. We apply this first step to the inverse and to match notation write τ_a:=Q_(a) and also suppress the in η(θ):=η_(θ). We use the same regularization. Suppose that ϕ is a nonnegative smooth function with compact support in (0,+∞) and define for any T > 0 Y:=∫_0^∞ϕ(a)τ_a∧ T . The next result states the differentiability of the random variable Y in the sense of Malliavin calculus and provides an explicit formula for its derivative. The derivative for the mollified inverse Y is D_rY= -γ∫_0^Tϕ(η(θ))∫_0^θ[0,s](r)(s)=-γ∫_η(r)^η(T)ϕ(y)y-η(r)_y. As we can see in the above formula we get _y, which by inverse function theorem is equal to e^-γ U_(τ_y)+γ^2/2ln1/ in agreement with the formula <ref>. Due to ϕ's compact support the Y is bounded, and so we can apply Fubini's theorem Y=∫_0^∞ϕ(a)∫_0^τ_a∧ T =∫_0^T∫_η(θ)^∞ϕ(a). So here we need to compute the Malliavin derivative of η(θ). By linearity and chain rule for the derivative operator D we obtain D_t∫_0^xe^γU_(s)-1/2γU_(s)^2= ∫_0^xe^γU_(s)-1/2γU_(s)^2γD_tU_(s) = ∫_0^xe^γU_(s)-1/2γU_(s)^2γ1_[0,s](t) = γη(t,x∨t ). Since >0, we have that η(t,x∨ t )^2<∞ and so η(θ)∈^ 1,2 (this can also work in the limit =0 by taking 2/γ^2>2⇔γ<1). Therefore, by chain rule we get Y ∈^ 1,2 with D_rY=-∫_0^Tϕ(η(θ))D_r(η(θ)) =-∫_0^Tϕ(η(θ))γη(r,θ∨ r ). Finally, making the change of variable η(θ )= y yields D_rY=-γ∫_η(r)^η(T)ϕ(y)y-η(r)_y. § INTEGRATION BY PARTS FORMULA In this section we will obtain an integration by parts formula for η(τ_a,τ_a+L) using the techniques from <cit.>. We apply the Malliavin calculus framework to the Gaussian field U__n for each fixed _n and then at the very end we will take limits _n→ 0 in the integration by parts formulas for η__n(τ__n,a,τ__n,a+L). For simplicity we will temporarily write η=η__n and τ_a=τ__n,a. §.§ Nonlinear expected value For the usual GMC we know that its expected value is linear η(a,b)=b-a. Using the Markovian-like δ-(SMP) property from before, we obtain a nonlinear relation for the expected value of the inverse. We have for a>0 and r≥δ η^δ(Q^δ(a),Q^δ(a)+r)-r=Q^δ(a)-a =∫_0^∞ Q_R(t)^δ(a)≤t ≤Q^δ(a) =∫_0^∞ η^δ(t)≤a ≤η_R(t)^δ(t) >0. In particular, for any a>0 we have Q^δ(a)>a. This proposition shows that the GMC η does not satisfy a "strong" translation invariance i.e. η(Q(a),Q(a)+r)≠ r. So the same is likely true for Q(a,a+t) Q(a,a+t)=∫_0^∞t>η^δ(Q^δ(a),Q^δ(a)+r)≠∫_0^∞t>η^δ(0,r)=Q(t). 
It also shows that Q^δ(a) is a nonlinear function of a. Ideally we would like to check whether the RHS of <ref> is uniformly bounded in a>0 a>0∫_0^∞η(t) ≤ a≤η_R(t)(t) <∞ =∞, but it is unclear of how the window [η(t) ,η_R(t)(t)] grows as t→ +∞. §.§ Assumptions In the work <cit.>, they make some assumptions about the covariance R(s,t) of the field that are worth comparing with even though we have to do a new proof for η. (H1) For all t∈ [0, T ], the map s↦ R(s, t) is absolutely continuous on [0, T ] and for some α>1 we have sup_s∈ [0,T]∫_0^TR(s,t)t^α<∞. (H3) The function R_t := R(t, t) has bounded variation on [0, T ]. (H5) lim sup_t→+∞ X_t = +∞ almost surely. (H6) For any 0 ≤ s < t, we have X_t - X_s^2 > 0. (H7) For any continuous function f , we have that s↦ F(s):=∫_0^T f(t)R(s,t)t is continuous on [0,∞). Even though our setting is different since we study hitting times of η(t) and not of X_t, these assumptions have analogues. In the <ref> we compute the derivative of R(τ,t):= {ln(r /ε )-1/-1/rτ-t , τ-t≤ε ln(r/τ-t) +τ-t/r-1 , r>τ-t≥. to be R(τ,t)t= { -1/-1/rt-τ/t-τ , τ-t≤ε -1/t-τt-τ/t-τ +1/rt-τ/t-τ ,r>τ-t≥. . and show the assumptions (H1),(H3) and H(7). The assumption (H6) is immediate from the covariance computation. Finally the analogue of the assumption (H6) for η is immediate since it is in fact a strictly increasing function. §.§ Integration by parts formula for truncated hitting time As in these works here too we study the exponential evaluated at the stopping time: M_t+ζ:=λ U__n^δ(t+ζ)- λ^2/2ln1/_n, t,ζ≥ 0 and some λ∈ [0,√(2)). The ζ is important here because we will then integrate over ζ to obtain a formula for η(τ_a,τ_a+L) with a,L≥ 0. The following proposition follows from <cit.> and it asserts that δ_tM:=1/λM_t+ζ-1 satisfies an integration by parts formula, and in this sense, it coincides with an extension of the Skorokhod divergence of M _[0,t]. <cit.> For any smooth and cylindrical random variable of the form F=f(X_t_1,...,X_t_n) for t_i∈ [0,t], we have Fδ_tM=∑_i=1^n∂ f/∂ x_i(X_t_1,...,X_t_n)∫_0^t+ζM_sRs(s,t_i). By writing Y=∫_0^∞ϕ(a)τ_a∧ T=∫_0^T∫_η(θ)^∞ϕ(a), where ϕ∈ C_c^∞(), we will apply <ref> to F:=p(Y-t), where p∈ C_c^∞() and M_t+ζ. In particular, due to the discontinuity of the Rs along the diagonal, we choose p_δ(x-y)=0 when x>y as they do in <cit.>. The following lemma uses the proof structure of <cit.>. We have the integration by parts relation p(Y)δ_tM= - p'(Y)∫_0^Tϕ(η(θ) ) ∫_0^t+ζM_s∫_0^θ Rs(b,s)(b) . = - p'(Y)∫_0^η(T)ϕ(y) ∫_0^t+ζM_s∫_0^y Rs(τ_b,s) _y . The inverse τ_y is a strictly increasing continuous function (even at the limit =0) and so we can define its Riemann-Stieltjes integral. This is because of the a)non-atomic nature of GMC <cit.> and b)GMC;s continuity and strict monotonicity, which in turn follows from satisfying bi-over dyadic intervals <cit.>. The strategy is to discretize the domain [0,T] and thus bring us to the setting of proposition <ref>. Consider an increasing sequence D_N:=σ_i: 0=:σ_0<σ_1<...<σ_N:=T of finite subsets of [0,T] such that their union ⋃_N≥ 1D_N is dense in [0,T]. Set D_N^θ:=D_N∩ [0,θ] with σ(θ):=max(D_N^θ), to let η_N(θ):=η_N(σ(θ)):=∑_k=1^σ(θ)U̅_(σ_k)σ_k-σ_k-1 and Y_N:= ∫_0^Tψ(η_N(θ) ) =∑_m=1^Nψ(η_N(σ_k-1) ) σ_k-σ_k-1. Then, Y_N and p(Y_N) are Lipschitz functions of U_(t) :t ∈ D_N. 
The partial σ_i-derivative is ∂ (p(Y_N))/∂σ_i=-p'(Y_N)∑_k=i+1^Nϕ(η_N(σ_k-1) )·U̅_(σ_i)σ_i-σ_i-1·σ_k-σ_k-1 and so the formula <ref> implies that p(Y_N)δ_tM= - ∑_i=2^Np'(Y_N)∑_k=i+1^Nϕ(η_N(σ_k-1) )· U̅_(σ_i) σ_i-σ_i-1 ·σ_k-σ_k-1 ∫_0^t+ζM_sRs(σ_i,s) = - p'(Y_N)∑_k=2^Nϕ(η_N(σ_k-1) ) ∫_0^t+ζM_s∑_i=1^k-1U̅_(σ_i) Rs(σ_i,s)σ_i-σ_i-1 σ_k-σ_k-1 . The function r ↦∫_0^t+ζM_sRs(s,r) is continuous and bounded by condition (H1). As a consequence, we can take the N-limit of the above Riemann sum to get the integral formula p(Y)δ_tM=- p'(Y)∫_0^Tϕ(η(θ) ) ∫_0^t+ζM_s∫_0^θU̅_(b)Rs(b,s) . Finally, making the change of variable η(θ )= y yields p(Y)δ_tM= - p'(Y)∫_0^η(T)ϕ(y) ∫_0^t+ζM_s∫_0^τ_y U̅_(b) Rs(b,s) _y = - p'(Y)∫_0^η(T)ϕ(y) ∫_0^t+ζM_s∫_0^y Rs(τ_b,s) _y , where in the last equality we used that η and τ are inverses of each other. §.§ Limits in the Integration by parts relation In this section we set a specific regularization ϕ_(x)=1/_[-1,0](x/) in <ref> Y_,a:=∫_0^∞ϕ_(x-a)(τ_x∧ T)=1/∫_a-^a(τ_x∧ T)=∫_0^1(τ_a-ξ∧ T), where we let τ_x=0 when x<0, and we take limits of ϕ=ϕ_ and p=p_δ as ,δ→ 0. Before that step, since the derivative of the mollification p' will diverge in the limit δ→ 0, we first integrate both sides in <ref> as done in <cit.>. Fix ψ∈ C_c^∞() and set c:=∫_ψ(a). We have the following integration by parts relation ∫_0^∞ψ(a)∫_0^∞p_δ(Y_,a-t)M_t+ζ = c-λ ∫_0^∞∫_0^η(T)∫_0^1ψ(y+w)p_δ(Y_,y-w-t) M_t+ζ∫_0^y Rt(τ_b,t+ζ). By further taking the limits in ,δ→ 0 we obtain the following relation for each T>0 ∫_0^∞ψ(a)M_τ_a∧T+ζ =c-λ ∫_0^η(T)ψ(y)M_τ_y+ζ∫_0^y Rt(τ_b,τ_y+ζ) _y. By integrating over ζ∈ [0,L] we obtain an IBP for shifted-GMC ∫_0^∞ψ(a)ητ_a∧T,τ_a∧T+L = c (L-0)-λ ∫_0^L∫_0^η(T)ψ(y)M_τ_y+ζ∫_0^y Rt(τ_b,τ_y+ζ) _y. Continuing from <ref> we rewrite it as ∫_0^∞p_δ(Y_,a-t)M_t+ζ = 1+λ∫_0^∞p_δ(Y_,a-t)δ(M_[0,t+ζ] = 1-λ∫_0^∞p_δ'(Y_,a-t)∫_0^η(T)ϕ_(y-a) ∫_0^t+ζM_s∫_0^y Rs(τ_b,s) _y . Now to remove the p' issue, we do an integration by parts for the integral to obtain 1-λ∫_0^∞p_δ(Y_,a-t)∫_0^η(T)ϕ_(y-a) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y . We multiply both sides by ψ(a) and integrate over the variable a ∫_ψ(a)∫_0^∞p_δ(Y_,a-t)M_t+ζ = c-λ∫_ψ(a) ∫_0^∞p_δ(Y_,a-t)∫_0^η(T)ϕ_(y-a) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y. Here for the -integral we use that ϕ_(y-a)=1/_[-1,0](y-a/) to write c-λ ∫_0^∞∫_0^η(T)1/∫_y^y+ψ(a)p_δ(Y_,a-t) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y. Finally, we do a change of variable a = y + w c-λ ∫_0^∞∫_0^η(T)∫_0^1ψ(y+w)p_δ(Y_,y-w-t) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y =: c-λ ∫_0^∞∫_0^η(T)F_,δ(y,t) G(t,y)_y for F_,δ(y,t) := ∫_0^1ψ(y+w)p_δ(Y_,y-w-t), G(t,y):= M_t+ζ∫_0^y Rt(τ_b,t+ζ). We next take limits and justify their swapping with the integrals. Limit → 0 We use that the inverse τ_y is a continuous function to take limit Y_,y- w=∫_0^1(τ_y- w-ξ∧ T) = τ_y∧ T and so the limiting w-integral is ∫_0^1ψ(y+w)p_δ(Y_,y-w-t) = ∫_0^1ψ(y)p_δ(τ_yw+τ_y (1-w)-t) = ψ(y)p_δ(τ_y-t). We next justify that we can swap limit and integrals in <ref>. By the compact support and smoothness of ϕ p we have a uniform constant F_,δ(y,t)=∫_0^1ψ(y+ w)p_δ(Y_,y- w-t)≤ K. Moreover,we can assume that compact support is contained suppp_δ⊆ [0,T+δ] and so the infinite integral in <ref> gets restricted to [0,T+δ]. We also use the uniform constant to bound as follows (<ref>) ≤K ∫_0^T+δ∫_0^η(T) G(t,y)_y. Finally, we will need to revert to the previous formula in terms of GMC ∫_0^y Rs(τ_b,s)=∫_0^τ_y Rs(b,s)(b). 
We put all these together ∫_0^∞∫_0^η(T)F_,δ(y,t) G(t,y)_y ≤ K ∫_0^η(T) _y∫_0^T+δ∫_0^T+δ Rt(b,t+ζ)(b) M_t+ζ = KT∫_ζ^ζ+T+δ∫_0^T+δ Rt(b,t)(b)(t) = KT∫_ζ^ζ+T+δ∫_0^T+δ Rt(b,t), where we also used that τ_y≤ T+δ and applied Fubini-Tonelli to integrate-out the GMCs. This final quantity is indeed finite due to the continuity of the integral as explained in <ref>. Therefore, all together we can use dominated convergence theorem to swap limits and integral (<ref>)= c-λ ∫_0^T+δ∫_0^η(T)ψ(y)p_δ(τ_y-t) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y. Limit δ→ 0 Here we follow parts of the <cit.>. Here we just use from <ref> that the integral ∫_0^yRt(τ_b,t+ζ)=∫_0^τ_yU̅_^r(b)Rt(b,t+ζ) is continuous in t even if ζ=0 but as long as _n>0. Therefore, we can take the limit in δ→ 0. Now in terms of using dominated convergence theorem, we use the same dominating factor as above. In summary we get the following limit δ(<ref>)= c-λ ∫_0^η(T)ψ(y)M_τ_y+ζ∫_0^y Rt(τ_b,τ_y+ζ) _y. § FORMULA FOR THE SHIFTED GMC In this section we use the IBP formula in <ref> to obtain a formula for the shifted GMC and the expected value of the hitting time. We will work with field U_ε^r for r>>0 and ζ>0. As mentioned in <ref> we already have one formula. By integrating over ζ∈ [0,L] we obtain an IBP for shifted-GMC ∫_0^∞ψ(a)ητ_a∧T,τ_a∧T+L = c L-λ ∫_0^L∫_0^η(T)ψ(y)M_τ_y+ζ∫_0^y Rt(τ_b,τ_y+ζ) _y. In the rest of the section we try to simplify this formula. §.§ Limit in → 0 for fixed ψ In the <ref>, ideally one would like to investigate taking → 0 and having the support of the ψ=ψ_n to be approximating to a point a_0. Assuming one can swap limits with integrals one would get the following formula ητ_a_0∧T,τ_a_0∧T+L = c L-λ ∫_0^LM_τ_a_0+ζ∫_0^a_0 Rt(τ_b,τ_a_0+ζ) 1/M_τ_a_0, where the factor 1/M_τ_a_0 originated from the formal limit of _y/=e^-U̅_^r(τ_y). The issue here is that this latter limit doesn't exist because the normalization is reversed (the same is true even for the field e^-U̅_^r(s) over deterministic s since its mean is diverging like ^-γ^2.) Therefore, we will study the IBP formula for fixed ψ and → 0. For fixed ψ∈ C_c() where we normalize ∫_ψ(a)=1, we have the relation ∫_0^∞ψ(a)ητ_a∧T,τ_a∧T+L = L+λ ∫_0^r∫_ζ^ζ+Tψ(η(θ-ζ))∫_ (θ-r)∨0^θ1/θ-t-1/r(t) (θ), where the GMCs have the field with =0. For simplicity we take T≥ 1>>0. One corollary is the inequality ∫_0^∞ψ(a)ητ_a∧T,τ_a∧T+L ≥L. Here we can actually take limit of ψ=ψ_n whose support is converging to a fixed value a_0, to get the inequality ητ_a_0,τ_a_0+L ≥L, which agrees with the result in <ref>. §.§.§ Proof of <ref> We start by writing the IBP formula explicitly using the covariance function. Using the explicit formula of the covariance we have the expression ∫_0^y Rt(τ_b,τ_y+ζ)=-∫_a^b1/τ_y+ζ-t-1/r(t) -1/-1/r b, τ_y, for a:= τ_y∧(τ_y+ζ-r)∨0b:= τ_y∧(τ_y+ζ-)∨0. For ease of notation in the proof we let s:=τ_y+ζ and a:= τ_y∧ (s-r)∨ 0, b:= τ_y∧ (s-)∨ 0, c:= τ_y∧ (s+) d:= τ_y∧ (s+r). Using the explicit formula for the partial derivative in <ref> we have the following ∫_0^τ_y U̅_^r(t) Rs(t,s) = ∫_a^b-1/s-t(t)+ 1/r a,b + ∫_c^d1/t-s(t)+ -1/r c,d+1/-1/r s∧τ_y,c-b,s∧τ_y. For s=τ_y+ζ we have a= τ_y∧ (τ_y+ζ-r)∨ 0, b= τ_y∧ (τ_y+ζ-)∨ 0, c:= τ_y d:= τ_y. Therefore, the above simplifies ∫_0^τ_y U̅_(t) Rs(t,s)|_s=τ_y+ζ = ∫_a^b-1/s-t(t)+ 1/r a,b +0+ -1/r ·0+1/-1/r 0-b, τ_y = -∫_a^b1/s-t-1/r(t) -1/-1/r b, τ_y. 
Returning to <ref> we write (<ref>)= ∫_0^∞ψ(a)ητ_a∧T,τ_a∧T+L = c L-λ ∫_0^L∫_0^η(T)ψ(y)M_τ_y+ζ-∫_a^b1/τ_y+ζ-t-1/r(t) -1/-1/r b, τ_y _y = c L-λ ∫_0^L∫_0^Tψ(η(θ))M_θ+ζ-∫_a^b1/θ+ζ-t-1/r(t) -1/-1/r b, θ , where we also undid the change of variables τ_y=θ y=η(θ), and let a:= θ∧(θ+ζ-r)∨0b:= θ∧(θ+ζ-)∨0. Taking → 0 on the LHS is clear since ψ is compactly supported and bounded. The question is what happens in the RHS. We study each term. We have the limit ∫_0^L∫_0^Tψ(η(θ))M_θ+ζ -1/-1/r b, θ =0. In the term b, θ, since b:= θ∧ (θ+ζ-)∨ 0, we have that as soon as ζ≥, we get identically zero b, θ =0 for every > 0. So we just study the integrals ∫_0^∫_0^Tψ(η(θ))M_θ+ζ -1/-1/r (θ+ζ-)∨0, θ = -1/-1/r ∫_0^ ∫_ζ^ζ+Tψ(η(θ-ζ))(θ-)∨0, θ-ζ (θ). Here we can apply Lebesgue differentiation theorem. We study the difference of functions f(ζ)-g_(ζ):= ∫_ζ^ζ+Tψ(η(θ-ζ))0, θ-ζ (θ)- ∫_^ζ+Tψ(η(θ-ζ))0,θ- (θ). In the first function by taking limit → 0 we get _0^ f(ζ)→f(0)= ∫_0^Tψ(η(θ)) 0,θ (θ). In the second function, we separate the two limits _0^ ∫_ζ^ζ+Tψ(η(θ-ζ))0,θ (θ)+_0^ ∫_^ζ+Tψ(η(θ-ζ))0,θ--0,θ (θ). The first term converges to the same limit as in <ref> and so they cancel out. Therefore, it suffices to show that the second term in <ref> goes to zero. We pull out the supremum _0^ ∫_^ζ+Tψ(η(θ-ζ))0,θ--0,θ (θ) ≤ _0^sup_≤z≤+Tz-,z ·∫_ζ^ζ+Tψ(η(θ-ζ)) (θ). The quantity inside the expectation is uniformly bounded in because we can use to separate them sup_≤z≤+Tz-,z^2^1/2· ∫_ζ^ζ+Tψ(η(θ-ζ)) (θ)^2^1/2, where due to <ref> the first factor goes to zero as → 0. We return to take the limit → 0 in <ref> (<ref>)= ∫_0^∞ψ(a)ητ_a∧T,τ_a∧T+L = c L-λ ∫_0^L∫_ζ^ζ+Tψ(η(θ-ζ))-∫_a^b1/θ-t-1/r(t) (θ), for a:= θ-ζ∧(θ-r)∨0b:= θ-ζ∧(θ-)∨0. We note here that if ζ≥ r, then we get a=θ-ζ=b and so the inner integral becomes zero. So we are left with ∫_0^r∫_ζ^ζ+Tψ(η(θ-ζ))-∫_ (θ-r)∨0^ b1/θ-t-1/r(t) (θ). The following lemma concludes the proof of <ref>. We have the limit ∫_0^r∫_ζ^ζ+Tψ(η(θ-ζ))∫_ (θ-r)∨0^θ-ζ∧(θ-)∨01/θ-t-1/r(t) (θ) = ∫_0^r∫_ζ^ζ+Tψ(η(θ-ζ))∫_ (θ-r)∨0^θ1/θ-t-1/r(t) (θ). A a heuristic we study the integrals without any GMCs: ∫_0^r∫_ζ^ζ+T∫_ (θ-r)∨0^θ-ζ1/θ-t-1/r = ∫_0^r∫_0^T∫_ (θ+ζ-r)∨0^θ1/θ+ζ-t -rT-r/6 = ∫_0^r∫_0^T ln1/ζ-ln1/r∧θ+ζ -rT-r/6 = -rln1/r1-3r/2-rT-r/6. So we see that even for r→ 0 we still have finiteness in the limit → 0. [proof of <ref>] We will apply dominated convergence theorem. In terms of limits we study the inner integrals f(ζ):=∫_ζ^ζ+Tψ(η(θ-ζ))∫_ (θ-r)∨0^θ-ζ∧(θ-)∨01/θ-t-1/r(t) (θ) Since we have fixed ψ and it has compact support, we get that it is bounded and so we upper bound f(ζ)≤ K ∫_ζ^ζ+T∫_ (θ-r)∨0^θ-ζ∧(θ-)∨01/θ-t(t) (θ) = K∫_ζ^ζ+T ∫_ (θ-r)∨0^θ-ζ∧(θ-)∨01/θ-t^1+γ^2 ⪅ T/ζ^γ^2, where we evaluate the correlation for the two GMCs. This factor is still integrable as long as γ^2<1. Therefore, we can indeed apply the dominated convergence theorem. §.§.§ IBP Formula for inverse We justify taking infinite limit T→ +∞. The finite T limit of <ref> is ∫_0^∞ψ(a)ητ_a,τ_a+L = L+λ ∫_0^r∫_0^∞ψ(η(θ))∫_ (θ+ζ-r)∨0^θ+ζ1/θ+ζ-t-1/r(t) _ζ(θ), where we used the notation _ζ(θ):=e^U^r_0(ζ+θ). Therefore, for L≥ r we use to <ref> obtain the following formula for the expected value of the inverse. The inverse satisfies the following integration by parts formula ∫_0^∞ψ(a)τ_a = ∫_0^∞ψ(a) a+λ ∫_0^r∫_0^∞ψ(η(θ))∫_ (θ+ζ-r)∨0^θ+ζ1/θ+ζ-t-1/r(t) _ζ(θ). [proof of <ref>] Since ψ is compactly supported supp(ψ)⊂ [0,S] for some S>0 we get that the integral is zero as soon as η(θ)>S. So for the LHS in <ref> we have ∫_0^Sψ(a)ητ_a∧T,τ_a∧T+L . 
Since the shifted GMC ητ_a∧ T,τ_a∧ T+L is continuous and uniformly bounded in T ητ_a∧T,τ_a∧T+L≤η0,τ_a+L, we can apply dominated convergence theorem. For the RHS we start by undoing the change of variables θ↔τ_y to write (<ref>)=L+λ ∫_0^r∫_0^η(T)∧Sψ(y)∫_ (τ_y+ζ-r)∨0^τ_y+ζ1/τ_y+ζ-t-1/r(t) e^U_τ_y+ζ. Here we use the following limiting ergodic statements for GMC <cit.>. Let M be a stationary random measure on admitting a moment of order 1+δ for δ>0. There is a nonnegative integrable random variable Y∈ L^1+δ such that, for every bounded interval I⊂, lim_T →∞1/T M(T I) = Y |I| almost surely and in L^1+δ, where |·| stands for the Lebesgue measure on . As a consequence, almost surely the random measure A∈ℬ()↦1/TM(TA) weakly converges towards Y|·| and _Y[M(A)]=Y |A| (_Y[·] denotes the conditional expectation with respect to Y). For GMC the Y variable is equal to one Y=1. One way to see it is using the independence of distant GMCs. By splitting η^1(0,n)/n into alternating even and odd intervals [k,k+1] to get two independent sequences and then apply strong law of large numbers to get convergence to η^1(0,n)/na.s.→1/2η^1(0,1)+1/2η^1(1,2)=1. Therefore, since the quantity is uniformly bounded in T by bounding by the integral over ∫_0^S, we can apply dominated convergence theorem. PART: Further directions and Appendix § FURTHER RESEARCH DIRECTIONS * Joint law for the Liouville measure The density of the inverse is in terms of two-point joint law of GMC: b≥ Q(x)≥ a=η(b)≥ x≥η(a). (Of course, if we have differentiability, we can just study η(b)≥ x). The same issue showed up when studying the decomposition of the inverse. For example, we could turn the conditional moments' bounds into joint law statements by rewriting the event Q(a)-Q(b)=ℓ in terms of η. Some approaches include conformal field theory in <cit.> and possibly Malliavin calculus <cit.>. See here for work on GMC and Malliavin calculus <cit.>. It would also be interesting to get bounds on the single and joint density of GMC using the Malliavin calculus techniques in <cit.>. In the same spirit as in <cit.>, one can try to Goldie-renewal result: see <cit.> for recent work extending the Goldie renewal result used in <cit.> to the case of joint law. * Regularity for GMC's Malliavin derivative It would be interesting to explore the regularity of the Malliavin derivative D^kη for k=k(γ) as γ→ 0. This can give different upper bounds for the density: Let q, α, β be three positive real numbers such that 1/q+1/α+1/β=1. Let F be a random variable in the space ^2,α, such that DF_H^-2β < ∞. Then the density p(x) of F can be estimated as follows p(x)≤ c_q, α, βF>x^1/qDF_H^-1+D^2F_L^α(Ω;H⊗ H)DF_H^-2β^1/β, where u_L^α(Ω;H⊗ H):= u_H⊗ H^α^1/α. * Derivatives in the IBP-formula In the spirit of the derivative computations done in <cit.>, one could try to extract some pdes/odes. We included some some heuristics computations for M_τ_a_0 :=e^λU_τ_a_0-λ^2/2ln1/. In <ref>, we can concentrate ψ around the point a_0 and use <ref> to get the identity Ψ(a,λ):=M_τ_a_0 = 1+λ M_τ_a_0 ∫_ (τ_a_0 -r)∨0^ (τ_a_0 -)∨01/τ_a_0 -t-1/r(t) +λ1/-1/r M_τ_a_0 (τ_a_0 -)∨0,τ_a_0 . So the λ derivative of the LHS is: Ψ(a,λ)λ= M_τ_a U_(τ_a) -λ/2M_τ_a ln1/ = 1/λM_τ_a lnM_τ_a and of the RHS is ψ(a,λ)λ=Ψ(a,λ)λ= M_τ_aF(a)+λM_τ_a U_(τ_a)F(a) -λ^2/2M_τ_a F(a) ln1/ = M_τ_a1+lnM_τ_a F(a) = ψ(λ,a) F(a)+ψ(λ,a)lnψ(λ,a) F(a), where ψ(λ,a):=M_τ_a. So one ODE from here is y'=y(1+ln(y))c , y(0)=1 which has the unique solution y(λ)=c^2/2-1 . 
The Ψ itself satisfies Ψ(a,λ)λ= M_τ_aF(a)+λM_τ_a U_(τ_a)F(a) -λ^2/2M_τ_a F(a) ln1/ = 1/λΨ(a,λ)-1+M_τ_a lnM_τ_aF(a). The identity is M_τ_a = 1+λ M_τ_a F(a). The derivative of the LHS is M_τ_a a = λ M_τ_a U_(x)x|_x=τ_aτ_aa. The derivative of the RHS is M_τ_a a = λ M_τ_a U_(x)x|_x=τ_aτ_aa F(a) +λ M_τ_a F(a)a , where F(a)a= a∫_0^a Rt(τ_b,τ_a)= Rt(τ_b,τ_a)|_b=a+ ∫_0^a ^2Rt_1∂t_2(τ_b,τ_a)τ_aa. § MOMENTS OF THE MAXIMUM AND MINIMUM OF MODULUS OF GMC In this section we study tail estimates and small ball estimates of the maximum/minimum of shifted GMC from <cit.>. One frequent theme is utilizing the 1d-correlation structure of GMC namely that neighboring evaluations η[0,1],η[1,2],η[2,3],η[3,4] are correlated. But the pairs η[0,1],η[2,3] and η[1,2],η[3,4] are separately . First we study the tail and moments of the maximum of the modulus of GMC. On the face of it, in studying the 0≤ T≤ LT,T+xδ, we see that it could diverge as δ,x→ 0 because we might be able to lower bound it by an increasing sequence of iid random variables such as kx,x(k+1) for k∈ [1,L/x]. We will see that at least for fixed δ>0, we actually do have decay as x→ 0. This is in the spirit of chaining techniques where supremum over a continuum index set is dominated in terms of a maximum over a finite index set.   We will also need an extension for a different field: for λ<1, the field U_ε^δ, λ with covariance U_ε^δ,λ(x_1 )U_ε^δ,λ(x_2 ) ={ln(δ/ε )-1/-1/δx_2-x_1+(1-λ)(1-x_2-x_1/δ) x_2-x_1≤ε   ln(δ/x_2-x_1)-1+x_2-x_1/δ+(1-λ)(1-x_2-x_1/δ) ≤x_2-x_1≤δ/λ   0 δ/λ≤x_2-x_1. . Moments p∈ [1,2/γ^2) For L,δ,x≥ 0 and δ≤ 1 we have T∈[0,L] T,T+xδ^p≤ cx^α(p)L/x+1^p/r_p≤ c(1+L+x)^p/r_p x^α(p)-p/r_p, where α(p)=ζ(p) when x≤ 1 and α(p)=p when x≥ 1, and the r_p>0 is an arbitrary number in p<r_p<2/γ^2. For simplification, we will also write p/r_p=p(γ^2/2+_p) for small enough _p>0. The same estimate follows for the measure η^δ,λ when x≤δ. Moments p∈ (0,1)Here we have T∈[0,L] T,T+xδ^p ⪅(1+L+x)^1/r_1 x^1-1/r_1^p, where as above 1<r_1<2/γ^2 and let c_1:=r_1-1/r_1=1-β- for arbitrarily small >0. In <ref>, we see that when α(p)-p/r_p>0, it decays to zero as x→ 0. By taking r_p≈2/γ^2, that means we require ζ(p)-p/r_p≈ pγ^2/2(2/γ^2-p)>0. Also, one can check that this exponent is a bit better than that given in <cit.> for general stochastic processes. Next we study the negative moments for the minimum of the modulus of GMC. We have for p>0 T∈[0,L] T,T+xδ^-p⪅ x^a_δ(-p)L/x+2^p/r2^-ζ(-r)p/r, where a_δ(-p):=ζ(-p) when x≤δ and a_δ(-p):=-p when x≥δ and r>0 satisfies p/r<1 and so for simplicity we take arbitrarily small _p:=p/r>0. The same follows for the measure η^δ,λ and x≤δ. Here we note that as r→ +∞, the constant 2^-ζ(-r)p/r diverges. So the smaller _p:=p/r>0, the larger the comparison constant. § PROPETIES OF THE COVARIANCE OF TRUNCATED FIELD §.§ Regularity of the covariance The following are some of the hypotheses used in the development of Malliavin calculus for Gaussian processes <cit.>. The difference is U_ε^r(t)-U_ε^r(s)^2 =2t-s/(1-/r), which is strictly positive for t≠ s. The covariance R(τ,t):= {ln(r /ε )-1/-1/rτ-t , τ-t≤ε ln(r/τ-t) +τ-t/r-1 , r>τ-t≥. is in fact an absolutely continuous function as a map t↦ R(τ,t) for each τ: when τ-t≤ε, we have the absolutely continuous function g(t)=τ-t, and when τ-t> ε, we use that ln1/x is a differentiable function for x>0. We compute the partial derivative to be R(τ,t)t= { -1/-1/rt-τ/t-τ , τ-t≤ε -1/t-τt-τ/t-τ +1/rt-τ/t-τ ,r>τ-t≥. . 
Therefore, for t>τ the derivative is negative R(τ,t)t<0 and for t<τ it is positive R(τ,t)t>0. So it is not continuous on the diagonal, which was one of the constraints in <cit.>. However, in the work <cit.>, they manage to weaken to the following hypotheses that are satisfied here. For all T>0 the supremum of the integral of the partial derivative is finite for any α≥ 1 sup_s∈ [0,T]∫_0^TR(s,t)t^α<∞ with a bound that diverges as T→ +∞ or → 0. In fact for any continuous function f we have that s↦ F(s):=∫_0^T f(t)R(s,t)t is continuous on [0,∞) as long as >0. Finite integral: proof of <ref> Case α=1 Because for s-t≥ r, we have zero covariance, we restrict the integral to the domains [(s-r)∨ 0,(s-)∨ 0] ∪ [(s-)∨ 0,s]∪ [s,(s+)∧ T]∪ [(s+)∧ T,(s+r)∧ T]. In the domain [(s-r)∨ 0,(s-)∨ 0], we have t<s and s-t> and so R(s,t)t=1/s-t-1/r and the integral will be ∫_(s-r)∨ 0^(s-)∨ 01/s-t-1/r=ln(r∧ s/∧ s)-1/rs∧ r-s∧ Similarly, in the domain [(s+)∧ T,(s+r)∧ T], we have R(s,t)t=-1/t-s-1/r=1/t-s-1/r and the integral will be ln(r∧ (T-s)/∧ (T-s))-1/r(T-s)∧ r-(T-s)∧ In the domain [(s-)∨ 0,s], we have R(s,t)t=1/-1/r=:c_,r and similarly, in [s,(s+)∧ T] we again have R(s,t)t=-1/-1/r=:c_,r. Therefore, the total integral will be ln(r∧ s/∧ s)-1/rs∧ r-s∧+ln(r∧ (T-s)/∧ (T-s))-1/r(T-s)∧ r-(T-s)∧+c_,r(s+)∧ T-(s-)∨ 0. So we see from here that as → 0, this integral diverges. The log-terms are the only source of potential singularity. When s is close to zero i.e. r>s> or ≥ s, we get ln(s/) and ln(s/s)=0 respectively. When s is close to T i.e. r>T-s> or ≥ T-s, we similarly get ln(T-s/) and ln(T-s/T-s)=0 respectively. Therefore, we indeed have a finite supremum for each T>0. Case α>1 Here instead of logarithms we get singular terms of the form 1/x^α-1. In particular following the same integration steps on splitting domains we get singular terms of the following form: 1/(r∧ s)^α-1-1/(∧ s)^α-11/(r∧ (T-s))^α-1-1/(∧ (T-s))^α-1. When s is close to zero i.e. r>s> or ≥ s, we get 1/r^α-1-1/^α-1 and 1/s^α-1-1/s^α-1=0 respectively. For s close to T, we conversely get 1/r^α-1-1/^α-1 and 1/(T-s)^α-1-1/(T-s)^α-1=0. We always get a singular power in >0. In summary, we again have a finite supremum for each T>0 and >0. The continuous weighted derivative: proof of <ref> We split over the same domains. We end up with the following total integral ∫_(s-r)∨0^(s-)∨0f(t)/s-t+ (-1/r) ∫_(s-r)∨0^(s-)∨0f(t)+ ∫_(s+)∧T^(s+r)∧Tf(t)/t-s+ (-1/r) ∫_(s+)∧T^(s+r)∧Tf(t) +c_,r ∫_(s-)∨0^(s+)∧Tf(t). The integrals containing only the continuous function f(t) are differentiable in s due to the fundamental theorem of calculus. In particular, the function g(t)=1/s-t is continuously differentiable in the above domains because they don't contain an -neighbourhood of the singularity t=s. Therefore, the integrals with integrands f(t)/s-t are differentiable due to Leibniz-rule. Case of → 0 and large T Here we get ∫_(s-r)∨0^sf(t)/s-t+ (-1/r) ∫_(s-r)∨0^sf(t)+ ∫_s^s+rf(t)/t-s+ (-1/r) ∫_s^s+rf(t) +1/-1/r ∫_(s-)∨0^(s+)∧Tf(t). [title=Whole bibliography]
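A small numerical sanity check (ours) of the claim above that sup_s ∫_0^T |∂R(s,t)/∂t| dt is finite for fixed ε>0 and diverges as ε→0; the values of ε, r and T below are arbitrary placeholders.

# Numerical check (ours): sup_s int_0^T |dR/dt| dt for the truncated-field covariance.
import numpy as np

eps, r, T = 0.01, 1.0, 2.0       # regularization, truncation scale, horizon (placeholders)

def dRdt(s, t):
    d = np.abs(s - t)
    sgn = np.sign(t - s)
    inner = -(1.0 / eps - 1.0 / r) * sgn                    # branch |s-t| <= eps
    outer = (-1.0 / np.maximum(d, eps) + 1.0 / r) * sgn     # branch eps < |s-t| < r
    return np.where(d <= eps, inner, np.where(d < r, outer, 0.0))

t = np.linspace(0.0, T, 40001)
dt = t[1] - t[0]
sup_val = max(np.sum(np.abs(dRdt(s, t))) * dt for s in np.linspace(0.0, T, 201))
print("sup_s int_0^T |dR/dt| dt ~", sup_val)   # finite for eps > 0, grows like log(1/eps)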
http://arxiv.org/abs/2307.05477v1
20230711175934
Wiedemann-Franz law in graphene in the presence of a weak magnetic field
[ "Yi-Ting Tu", "Sankar Das Sarma" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA The experimental work [J. Crossno et al., https://doi.org/10.1126/science.aad0343Science 351, 1058 (2016)], which reported the violation of the Wiedemann-Franz law in monolayer graphene characterized by a sharp peak of the Lorenz ratio at a finite temperature, has not been fully explained. Our previous work [Y.-T. Tu and S. Das Sarma, https://doi.org/10.1103/PhysRevB.107.085401Phys. Rev. B 107, 085401 (2023)] provided a possible explanation through a Boltzmann-transport model with bipolar diffusion and an energy gap possibly induced by the substrate. In this paper, we extend our calculation to include a weak magnetic field perpendicular to the graphene layer, which is experimentally relevant, and may shed light on the possible violation or not of the Wiedemann-Franz law. We find that the magnetic field enhances the size of the peak of the Lorenz ratio but has little effect on its position, and that the transverse component of the Lorenz ratio can be either positive or negative depending on the parameter regime. In addition, we do the same calculation for bilayer graphene in the presence of a magnetic field and show the qualitative similarity with monolayer graphene. Our work should motivate magnetic-field-dependent experiments elucidating the nature of the charge carriers in graphene layers. Wiedemann-Franz law in graphene in the presence of a weak magnetic field Sankar Das Sarma ======================================================================== § INTRODUCTION In our previous work <cit.>, we proposed a simple theory that qualitatively explains the apparent violation of the Wiedemann-Franz (WF) law in monolayer graphene (MLG) reported in the experimental work of Crossno et al. <cit.>. The WF law states that the Lorenz number L=κ/(σ T) is a universal constant in metals, L_0=π^2/3(k_B/e)^2 <cit.>. Here, κ and σ are the thermal and electrical conductivities of the charge carrier, k_B and e are the Boltzmann constant and the electron charge, respectively. This law is largely satisfied by normal metals described by Fermi liquids, but can sometimes be violated due to inelastic scattering effects <cit.> or the bipolar diffusion effect <cit.>. In Ref. <cit.>, the authors reported a large violation of the WF law in MLG characterized by a high peak of L/L_0∼ 20 at a finite temperature T∼ 60 K, and attributed this violation to the “non-Fermi liquid” hydrodynamic effect of the quantum Dirac fluid nature of intrinsic graphene. However, even with six free parameters, the finite temperature peak cannot be well-explained with the hydrodynamic theory <cit.>, and that experimental observation in Ref. <cit.> remains unexplained. In Ref. <cit.>, we proposed an alternative but much simpler theory based on Boltzmann transport theory, where the scattering by short- and long-range impurities and acoustic phonons are treated phenomenologically. No Fermi liquid violations or exotic interaction effects were included in our conventional Boltzmann theory in Ref. <cit.>. We demonstrated that the bipolar diffusion effect, arising from the thermally induced electrons and holes around the Dirac point, produces a finite-temperature peak of L/L_0, which can be very high if we assume an (unintentional and uncontrolled) energy gap opening at the Dirac point, which is possible in experiments due to the presence of the hBN substrate underlying the graphene layers <cit.>. 
However, we do not claim that this theory unambiguously explains all of the experimental observations, and in particular, the presence of a gap at the Dirac point must be validated for our theory to explain the finite temperature peak reported in Ref. <cit.>. Indeed, it is unknown whether the hBN substrate really induces such a gap, and if so, what the size of the gap and the shape of the band near the gap could be. More experimental work is needed to settle down the best explanation of that observation. In particular, it is important that the findings of Ref. <cit.> are reproduced or revised experimentally with more data so that we have a more complete picture of the situation. One way to extend the original experiment <cit.> is to add a magnetic field perpendicular to the graphene surface <cit.>. In this way, many qualitative behaviors can be checked with the theory. Will the finite-temperature peak of L/L_0 be enhanced or suppressed by the magnetic field? Will the position of the peak shift? In addition, the magnetic field creates transverse motions of the electrons and holes, which give more complex features such as the possible change in the sign of the transverse component of the Lorenz ratio. But, adding a magnetic field also complicates the physics because now there are three independent parameters controlling the system: temperature, doping, and magnetic field. In addition, there could be the fourth additional uncontrolled parameter associated with the energy gap. This paper is the follow-up to and extension of Ref. <cit.>. We include a weak magnetic field in the Boltzmann transport theory of Ref. <cit.>, while keeping everything else the same. We find that the size of the finite-temperature peak of L/L_0 is enhanced by the magnetic field, while the position does not change much. In addition, the sign of the transverse component can either be positive or negative depending on the parameter regime. In addition, we repeat the same calculation for bilayer graphene (BLG), which is also experimentally relevant, and find similar qualitative results. These observations can be tested experimentally to provide a step towards explaining the intriguing observations in Ref. <cit.>. The rest of this paper is organized as follows: In Sec. <ref>, we present the setup of our theory, that is, the Boltzmann transport theory with magnetic field and bipolar diffusion included. In Secs. <ref> and <ref>, we present the models and results for MLG and BLG, respectively. We conclude this paper in Sec. <ref>. § THEORY Our starting point is the Boltzmann equation ∂ f/∂ t + 𝐫̇·∂ f/∂𝐫 + 𝐤̇·∂ f/∂𝐤 = ℐ{f} , where f(𝐫,𝐤,t) is the distribution of electron wave packets at position 𝐫, wavevector 𝐤, and time t. The semiclassical equations of motion are 𝐫̇ =1/ħ∂ε/∂𝐤-𝐤̇×Ω 𝐤̇ =-e/ħ(𝐄+𝐫̇×𝐁) , where 𝐄 (𝐁) is the applied electric (magnetic) field and Ω is the Berry curvature. We assume that f is time-independent (steady state) and f=f_0+δ f for a small perturbation δ f near the local equilibrium f_0(𝐫,𝐤) = 1/expε(𝐤)-μ(𝐫)/T(𝐫)+1, In this paper, we measure energies directly in the units of temperature (Kelvins), so Boltzmann's constant k_B equals unity in the formulas. In the case that the magnetic field is weak in the sense that the cyclotron radius is much larger than the Fermi wavelength, we can neglect Landau quantization as well as the effect of the Berry curvature. 
We restrict our theory entirely to the weak field semiclassical regime so that the magnetic field only adds a transverse force on the carriers without affecting anything else. In the linear response regime of the applied electrochemical force ℰ=E+1/e∇μ and temperature gradient ∇ T, the Boltzmann equation can be linearized as <cit.> 𝐯·(eℰ+ε-μ/T∇ T)(-∂ f_0/∂ε)-e/ħ𝐯×𝐁·∂δ f/∂𝐤 = -δf/τ, where 𝐯=1/ħ∂ε/∂𝐤 is the velocity (𝐯=v𝐤̂ for a scalar function v(k) in our case), and we have used the relaxation time approximation for the collision term with relaxation time τ, which may depend on ε as well as T. We only consider the case where 𝐁=B_z 𝐳̂ is perpendicular to the surface of the material. Since our system is rotationally symmetric, without any loss of generality, the transport coefficients can be calculated by assuming ℰ and ∇ T to be in the 𝐱̂ direction, and then solving the differential equation for δ f, and by plugging into the expressions for electrical and thermal currents: J_x,y = -eg_sg_v∫d^2k/(2π)^2 v_x,yδ f = ℰ_x(L_EE)_xx,yx + ∇ T_x(L_TE)_xx,yx Q_x,y = g_sg_v∫d^2k/(2π)^2 v_x,y (ε-μ) δ f = ℰ_x(L_ET)_xx,yx + ∇ T_x(L_TT)_xx,yx to extract the coefficients (here we restrict ourselves to a single band, and the degeneracies are g_v=g_s=2 in our case). The resulting formulas are [The factor of 1/2 was missing in Ref. <cit.>, but is irrelevant in the calculation of the Lorenz ratio.] (L_EE)_xx = e^2/2∫ dε(-∂ f_0/∂ε)D τ v^2/1+(eτ v B_z/ħ k)^2 (L_EE)_yx = e^2/2∫ dε(-∂ f_0/∂ε)D (eτ v B_z/ħ k)τ v^2/1+(eτ v B_z/ħ k)^2 (L_TE)_xx = -e/2∫ dε(-∂ f_0/∂ε)D τ v^2/1+(eτ v B_z/ħ k)^2(ε-μ) (L_TE)_yx = -e/2∫ dε(-∂ f_0/∂ε)D (eτ v B_z/ħ k)τ v^2/1+(eτ v B_z/ħ k)^2(ε-μ) (L_TT)_xx = -1/2T∫ dε(-∂ f_0/∂ε)D τ v^2/1+(eτ v B_z/ħ k)^2(ε-μ)^2 (L_TT)_yx = -1/2T∫ dε(-∂ f_0/∂ε)D (eτ v B_z/ħ k)τ v^2/1+(eτ v B_z/ħ k)^2(ε-μ)^2 and that L_ET=-1/TL_TE, where D is the density of states. The set of equations defined by Eq. (<ref>) are the finite-magnetic-field generalization of the basic Boltzmann transport theory for our problem. Now the above can be calculated for each band, and the total transport coefficients are the sums of them (we do not consider interband scatterings here). The calculation is done with fixed carrier density n, where the chemical potential μ is obtained self-consistently n=∫_+ dε D_+f_0-∫_- dε D_-(1-f_0) . where the range of the first (second) integral is in the conduction (valance) band, and ± denote the band indices. Now the electrical and thermal conductivity matrices are σ = L_EE κ = L_TT - L_TEL_EE^-1L_ET. Here, the bipolar diffusion effect is automatically included. We define the effective Lorenz number componentwise L_xx = κ_xx/σ_xxT, L_xy = κ_xy/σ_xyT. For both components, the Lorenz number equals L_0=π^2/3e^2 in the regime where the Wiedemann-Franz law is satisfied, so below we will present the results using the Lorenz ratio L_xx,xy/L_0. Note that such a componentwise treatment for the Lorenz ratio in the presence of a magnetic field was used also in hydrodynamic theory <cit.>. § MONOLAYER GRAPHENE The MLG is typically modeled by linearly dispersive gapless conduction and valance bands <cit.>. However, we consider the possibility of a gap opening as in Ref. <cit.>, which may be due to the hBN substrate <cit.>. Since the exact behavior near the gap is unknown, we use the simplest model for the gap as in Ref. <cit.>, ε_+(𝐩) =+ v_F |𝐩| ε_-(𝐩) =-v_F |𝐩|-Δ , where Δ is the size of the gap and v_F∼ 1×10^6 m/s is the Fermi velocity of graphene. The subscripts label the conduction (+) and the valance (-) band. 
The density of states is (including the spin degeneracy g_s=2 and valley degeneracy g_v=2) D_+(ε) =2ε/πħ^2 v_F^2 for ε>0 D_-(ε) =2(-Δ-ε)/πħ^2 v_F^2 for ε<-Δ . For the relaxation time τ, it is known that the dominant transport mechanisms in graphene are the scattering by short-range disorder, long-range disorder, and acoustic phonon <cit.>. As in Ref. <cit.>, we consider only these three mechanisms, using the phenomenological model derived from Refs. <cit.>: τ_+(ε)=1/Aε+BTε+C/ε, τ_-(ε)=τ_+(-Δ-ε). Here the parameters A, B, C represent the scattering strengths of short-range disorder, acoustic phonon, and long-range Coulomb disorder, respectively (the magnetic field is denoted by “B_z” to avoid confusion with the coefficient “B” here). At zero magnetic field, only the ratios between the parameters affect the Lorenz ratio. Although scaling τ by a constant affects the Lorenz ratio at nonzero B_z, it only changes the unit of it. Since the experimental value of these parameters is unknown, we will just choose one set of typical (A/C,B/C) for each Δ, and present the results using a unit of B_z that depends on C. This also means that the maximum B_z that satisfies the weak requirements cannot be pinned down in our results, as the actual value depends on C, which is unknown. Different choices of (A/C,B/C) will affect the result quantitatively but not qualitatively (the B_z=0 case of such parameter dependence has been presented in Ref. <cit.>). With such a large number of unknown parameters in the problem, our goal is neither data fitting nor precise quantitative predictions, but aiming at the expected qualitative dependence of the effective Lorenz ratio in the presence of a finite magnetic field. We present the magnetic-field-dependent result of the Lorenz ratio as a function of T in Figure <ref> and as a function of n in Figure <ref>. We observe that (1) the Wiedemann-Franz law is asymptotically satisfied for both the longitudinal and transverse component as T→ 0; (2) for the longitudinal component, the finite temperature peak is enhanced by the magnetic field, and the enhancement is larger at lower density; (3) the position of the finite temperature peak is almost independent of B_z; (4) for the transverse component, the value can be either positive or negative, depending on the parameter regime, which is expected due to the complex behavior or the electron and holes in the presence of the magnetic field. For completeness, we also present the Lorenz ratio as a function of B_z for a particular choice of parameters in the left column of Fig. <ref>. We caution however that our theory would not apply at “larger” values of B_z where strong field effects such as Landau quantization would come into play. § BILAYER GRAPHENE Near the Fermi surface, the BLG is modeled by parabolic dispersive conduction and valance bands <cit.>. As in the case of MLG, we consider the situation where a gap is opened, and use the simplest model: ε_+(𝐩) =+ |𝐩|^2/2m ε_-(𝐩) =- |𝐩|^2/2m-Δ , where Δ is the size of the gap and m≈ 0.2 eV/v_F^2 is the effective mass <cit.>. The density of states is (including the spin degeneracy g_s=2 and valley degeneracy g_v=2): D_±(ε)=2m/πħ^2 for ε>0 or ε<-Δ. We use the same scattering mechanisms for τ as in MLG, but the scattering exponents are differentbecause of the modified band structures. From the result of Ref. <cit.>, we use the following phenomenological model: τ_+(ε)=1/A+BT+C/ε, τ_-(ε)=τ_+(-Δ-ε). 
Here the parameters A, B, C represent the scattering strengths of short-range disorder, acoustic phonon, and long-range Coulomb disorder, respectively, as in the MLG case. We present the result of the Lorenz ratio as a function of T in Figure <ref> and as a function of n in Figure <ref>. We observe that the behavior is qualitatively similar to the case of MLG. In particular, there is a high finite-temperature peak when there is a gap but only manifests a small peak when there is no gap, as in the case of MLG found in Ref. <cit.>, and the peak becomes higher for nonzero B_z. However, the quantitative details are different from that of MLG (note that different combinations of scattering parameters can also lead to some difference in the quantitative details, so one should not compare the MLG and BLG results presented here too literally). Again, for completeness, we present the Lorenz ratio as a function of B_z for a particular choice of parameters in the right column of Fig. <ref>. § CONCLUSION Using the Boltzmann transport theory with a magnetic field, we show that the large finite-temperature peak of L_xx/L_0, observed in Ref. <cit.> and possibly (qualitatively) explained by our previous paper <cit.>, is enhanced, but not shifted much, by the presence of the magnetic field. In addition, we note that the sign of L_xy/L_0 may either be positive or negative, depending on the parameter regime. Such qualitative behaviors are the same in both MLG and BLG. Our work provides several qualitative predictions for future experimental works to verify (or falsify). Note that we do not claim that our previous paper <cit.> unambiguously explained the observation in Ref. <cit.>, and hence do not claim that the results in this paper necessarily correspond to the reality if one adds a magnetic field to the experimental setup in that paper. Our purpose is to provide a possible explanation, as all the previously attempted explanations are not very successful. We are successful to the extent that the addition of a single parameter, namely a gap, can provide an explanation for the intriguing data of Ref. <cit.>. Now, we develop the same theory in the presence of a magnetic field, providing further motivation for more experiments to clarify the physics of the MLG and BLG Wiedemann-Franz law. If future experimental results with the addition of a weak magnetic field agree qualitatively with the results here, then one may say that our explanation <cit.> is likely correct (of course, more experiments, such as deliberately inducing a gap, may also be necessary to settle down the explanation). In this case, one may then try to extract the parameters from the data, and establish a more quantitative microscopic theory for the transport in MLG as well as BLG. On the other hand, if future experimental results disagree qualitatively with the results here, then it would imply that the finite temperature peak observed in <cit.> cannot be explained just by considering bipolar diffusion and the induced gap. In that case, more theoretical works would be necessary to solve the puzzle presented in <cit.>. Such experiments to understand the temperature dependence of the Wiedemann-Franz law in graphene in the presence of a magnetic field is currently ongoing <cit.>, and hopefully, we will have a resolution of the puzzle posed by Ref. <cit.> in the near future. § ACKNOWLEDGMENT This work is supported by the Laboratory for Physical Sciences. apsrev4-2
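As a supplement (ours, not the authors' code), the sketch below indicates how the transport integrals of the Theory section can be evaluated numerically for the gapped MLG model, with the chemical potential fixed self-consistently at a given carrier density. It is written in the standard Onsager-kernel form rather than following the displayed L_EE, L_TE, L_TT expressions literally; the scattering strengths A, B, C, the gap, the energy cutoffs, and the sign conventions are placeholder assumptions of ours rather than values taken from the paper.

# Supplementary sketch (ours): Boltzmann transport integrals for gapped MLG in a
# weak perpendicular field, giving the Lorenz ratios L_xx/L_0 and L_xy/L_0.
import numpy as np
from scipy.optimize import brentq

kB, e, hbar, vF = 1.380649e-23, 1.602176634e-19, 1.054571817e-34, 1.0e6
Delta = 50.0 * kB                            # band gap (here 50 K, a placeholder)
A, B, C = 0.0, 0.0, 4.0e-8                   # placeholder scattering strengths; only the
                                             # Coulomb-like term C is kept for simplicity
XI = np.linspace(1e-3, 6000.0, 200_000) * kB # kinetic energy measured from the band edge (J)
DXI = XI[1] - XI[0]
DOS = 2 * XI / (np.pi * hbar**2 * vF**2)     # density of states per band (incl. g_s*g_v = 4)

def tau(xi, T):
    return 1.0 / (A * xi + B * T * xi + C / xi)

def carrier_density(mu, T):
    f_el = 1.0 / (np.exp((XI - mu) / (kB * T)) + 1.0)          # conduction-band electrons
    f_ho = 1.0 / (np.exp((XI + Delta + mu) / (kB * T)) + 1.0)   # valence-band holes
    return np.sum(DOS * (f_el - f_ho)) * DXI

def onsager_kernels(mu, T, Bz):
    K = {n: np.zeros((2, 2)) for n in (0, 1, 2)}
    t = tau(XI, T)
    for s in (+1.0, -1.0):                   # band index; velocity scalar v = s*vF
        eps = XI if s > 0 else -Delta - XI
        h = s * e * vF**2 * Bz * t / XI      # dimensionless Hall factor e*tau*v*Bz/(hbar*k)
        mdfde = 0.25 / (kB * T) / np.cosh((eps - mu) / (2 * kB * T))**2   # -df0/de
        base = 0.5 * mdfde * DOS * t * vF**2 / (1.0 + h**2)
        for n in (0, 1, 2):
            w = base * (eps - mu)**n
            kxx = np.sum(w) * DXI
            kyx = np.sum(w * h) * DXI
            K[n] += np.array([[kxx, -kyx], [kyx, kxx]])
    return K

def lorenz_ratios(n2d, T, Bz):
    mu = brentq(lambda m: carrier_density(m, T) - n2d, -1000 * kB, 1000 * kB)
    K = onsager_kernels(mu, T, Bz)
    sigma = e**2 * K[0]                                        # electrical conductivity
    kappa = (K[2] - K[1] @ np.linalg.inv(K[0]) @ K[1]) / T     # electronic thermal conductivity
    L0 = (np.pi**2 / 3.0) * (kB / e)**2
    Lxx = kappa[0, 0] / (sigma[0, 0] * T * L0)
    Lxy = kappa[0, 1] / (sigma[0, 1] * T * L0) if Bz else float("nan")
    return Lxx, Lxy

print(lorenz_ratios(n2d=1e14, T=60.0, Bz=0.1))   # density in m^-2, T in K, Bz in Tesla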
http://arxiv.org/abs/2307.04121v1
20230709082717
A Deep Learning Framework for Solving Hyperbolic Partial Differential Equations: Part I
[ "Rajat Arora" ]
cs.LG
[ "cs.LG", "cond-mat.mtrl-sci", "cs.NA", "math.AP", "math.NA" ]
http://arxiv.org/abs/2307.05710v2
20230711182824
A Vacuum-Compatible Cylindrical Inertial Rotation Sensor with Picoradian Sensitivity
[ "M. P. Ross", "J. van Dongen", "Y. Huang", "P. Zhou", "Y. Chowdhury", "S. K. Apple", "C. M. Mow-Lowry", "A. L. Mitchell", "N. A. Holland", "B. Lantz", "E. Bonilla", "A. Engl", "A. Pele", "D. Griffith", "E. Sanchez", "E. A. Shaw", "C. Gettings", "J. H. Gundlach" ]
physics.ins-det
[ "physics.ins-det" ]
arabic Center for Experimental Nuclear Physics and Astrophysics, University of Washington, Seattle, Washington 98195, USA Vrije Universiteit Amsterdam, 1081 HV Amsterdam, Netherlands Dutch National Institute for Subatomic Physics, Nikhef, 1098 XG, Amsterdam, Netherlands Center for Experimental Nuclear Physics and Astrophysics, University of Washington, Seattle, Washington 98195, USA Vrije Universiteit Amsterdam, 1081 HV Amsterdam, Netherlands Dutch National Institute for Subatomic Physics, Nikhef, 1098 XG, Amsterdam, Netherlands Stanford Univserity, Stanford, CA 94305 California Institute of Technology, Pasadena, CA, 91125, USA Center for Experimental Nuclear Physics and Astrophysics, University of Washington, Seattle, Washington 98195, USA We describe an inertial rotation sensor with a 30-cm cylindrical proof-mass suspended from a pair of 14-µm thick BeCu flexures. The angle between the proof-mass and support structure is measured with a pair of homodyne interferometers which achieve a noise level of ∼ 5 prad/√(Hz). The sensor is entirely made of vacuum compatible materials and the center of mass can be adjusted remotely. A Vacuum-Compatible Cylindrical Inertial Rotation Sensor with Picoradian Sensitivity J. H. Gundlach August 12, 2023 ==================================================================================== § INTRODUCTION Sensing minute rotations has long drawn interest from a variety of scientific fields. Recently, rotation sensors with sub-nrad sensitivities have been pursued to improve the seismic isolation systems of gravitational wave observatories <cit.> and to allow novel measurements of the rotational component of seismic waves <cit.>. Multiple devices now reach this sensitivity including ring-laser gyros <cit.> and flexure-based inertial rotation sensors <cit.>. Many of these devices are large (meter-scale) and must be maintained with human intervention making them inadequate for certain applications. Here we describe the Cylindrical Rotation Sensor (CRS), a 30-cm scale inertial rotation sensor which reaches a sensitivity of ∼5 prad/√(Hz) at 1.5 Hz. This design continues our previous sensor development <cit.> and shares many qualities with prior designs. The sensor is made of low-outgassing ultra-high-vacuum compatible materials and can be operated and centered remotely. We designed this sensor to improve the rotational seismic isolation performance of gravitational wave observatories. However, we expect the CRS to be applicable in a wide range of research projects, particularly in rotational seismology. § MECHANICS The core mechanism of the CRS is a 30-cm diameter, 5.4-kg aluminum cylindrical proof-mass with a moment of inertia of 0.094 kg-m^2 suspended from a pair of 14-µm thick BeCu flexures. The center of mass is tuned to be < 22 nm from the pivot point of the flexures corresponding to a translational rejections <cit.> of < 1.3 µrad/m. This causes the system to behave as a simple rotational spring-mass system with a resonant frequency of 17 mHz. The proof-mass then acts as an inertial reference above this resonant frequency. The working principle is described by the cartoon shown in Figure <ref>. The angle between the support structure and the proof-mass is measured using a pair of homodyne interferometers <cit.>, see Section <ref>. As the proof-mass is inertially isolated from motion of the support-structure, angle changes sensed by the readout represent support-structure motion about the axis that runs through the center of the flexures. 
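The quoted numbers can be checked against each other with a quick sketch (ours); the relations κ = I(2π f_0)^2 for the implied flexure stiffness and θ/x ≈ mδ/I for the translational coupling above resonance are standard rigid-body estimates rather than anything taken from the paper.

# Back-of-the-envelope consistency check (ours) of the quoted proof-mass parameters.
import numpy as np

I = 0.094          # moment of inertia, kg m^2
m = 5.4            # proof-mass, kg
f0 = 17e-3         # resonant frequency, Hz
d = 22e-9          # center-of-mass offset from the flexure pivot, m

kappa = I * (2 * np.pi * f0) ** 2      # implied rotational stiffness, N m / rad
rejection = m * d / I                  # translational coupling above resonance, rad/m

print(f"flexure rotational stiffness ~ {kappa:.2e} N m/rad")
print(f"translational rejection      ~ {rejection:.2e} rad/m")   # ~1.3e-6 rad/m, as quoted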
This allows the device to sense 1-D horizontal angular motion of the surface the sensor is attached to. A detailed description of the dynamics of flexure-based inertial rotation sensors can be found in <cit.>. Figure <ref> shows a picture of the CRS prototype. The proof-mass was machined out of a single monolithic piece of aluminum to maximize thermal uniformity. A seat structure is attached near the center of the cross to which the lower halves of the flexures are mounted with a pair of clamps on either side of the proof-mass. The upper halves of the flexures are mounted to the support structure using similar clamping. The support structure is made of aluminum and is primarily formed by a pair of legs on either side of the proof-mass. These connect through the upper quadrants of the proof-mass. This design increases the stability of the structure. Additionally, the structural pieces are significantly oversized to maximize thermal mass and minimize the impact of high-frequency vibrations. § READOUT To significantly improve the rotational performance of gravitational wave observatories, the sensor must outperform the rotational performance of a pair of broadband seismometers located 1-m apart. To meet this requirement, we installed two homodyne interferometers (detailed in <cit.>) on opposite sides of the proof-mass, shown in Figure <ref>. These deploy a variety of polarization optics to measure multiple phases of the interference pattern produced by a Michelson interferometer. One arm of the interferometer was formed by a mirror attached to the proof-mass, allowing for the distance between the optics and the proof-mass to be measured. The interferometers shared a common laser source (RIO ORION 1064 nm) coupled into the vacuum chamber via fiber optics and split by an in-vacuum fiber splitter. This increased common-mode noise subtraction. Deploying two interferometers sensing mirrors on either end of the proof-mass allows for the extraction of the angle via: θ = x_1-x_2/2 r where x_1 and x_2 are the distance change sensed by the interferometers and r= 15.24 cm is the radius from the flexures to each mirror. The differential measurment allows any common noise between the interferometers to be subtracted from the signal of interest. Namely, the frequency noise of the laser that illuminates both interferometers can be minimized. § REMOTE CENTERING As the CRS was designed to be installed inside the vacuum chambers of gravitational wave observatories, it needed to be operated remotely for long durations. Once suspended, the equilibrium angle of the proof-mass can drift over time due to various physical mechanisms, such as changes in ambient temperature and relaxation of internal stresses. The drifts in the equilibrium angle can drive the proof-mass outside the range of the interferometer and even cause it to rest on its mechanical stops. To counter this drift, we can shift the proof-mass's horizontal center-of-mass using the sensor's remote mass adjuster. This process is temporarily disruptive to the sensor's performance yet is only needed occasionally. Some commercial broadband seismometers have a similar centering mechanism. The remote mass adjuster consists of a 1-gram brass mass attached to a 0-90 screw that is allowed to rotate but is held in place by a BeCu leaf spring. One edge of the mass is in contact with a flat which allows the mass to be precisely translated by rotating the screw. This assembly is installed on the cross of the proof-mass. 
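To make the differential readout described in the previous section concrete, the short sketch below converts two interferometer displacement records into the proof-mass angle via θ = (x_1 - x_2)/(2r) and shows how common-mode disturbances, such as laser frequency noise, cancel in the difference. The sampling rate, signal names, and noise amplitudes are illustrative assumptions rather than measured instrument values.

```python
# Minimal sketch of the differential angle readout, theta = (x1 - x2) / (2 r).
# Sampling rate and noise amplitudes below are assumptions for illustration only.
import numpy as np

fs = 64.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0.0, 600.0, 1.0 / fs)
r = 0.1524                                  # lever arm from the flexure axis to each mirror [m]

true_angle = 5e-9 * np.sin(2 * np.pi * 0.2 * t)    # platform rotation [rad] (assumed)
common = 2e-10 * np.random.randn(t.size)           # common-mode noise, e.g. laser frequency noise
readout = 1e-12                                    # independent per-interferometer noise [m] (assumed)

# Each interferometer measures its mirror displacement plus common-mode and independent noise.
x1 = +r * true_angle + common + readout * np.random.randn(t.size)
x2 = -r * true_angle + common + readout * np.random.randn(t.size)

theta = (x1 - x2) / (2 * r)                 # common-mode terms cancel in the difference

print("rms angle error [rad]:", np.std(theta - true_angle))
```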
Details of the centering mechanism can be found in <cit.>. Running wires to the proof-mass to power a motor would be significantly stiffer than the flexures and ruin the performance of the sensor. To alleviate this issue, when adjustment is needed, a motor attached to the support structure turns the screw. A set of claws with intentionally large backlash couple the motor to the adjuster. This coupling allows the motor to rotate the adjuster while making contact, then back rotate to mechanically decouple. Once decoupled, the sensor returns to its previous dynamics with a shifted equilibrium angle. § NOISE PERFORMANCE The CRS was tested in a bell-jar vacuum chamber housed in a defunct-cyclotron cave at the Center for Experimental Nuclear Physics and Astrophysics on the campus of University of Washington. The cave provided thermal stability but the location had a high level of seismic activity as it was on an urban campus near multiple high-traffic roads. To assess the intrinsic noise of the instrument, we calculated the residuals of a coherent subtraction between the two readouts. This subtraction is conducted with the mccs2 algorithm <cit.>, which removes the coherent part of two signals to leave only the incoherent noise. For the CRS, this represents the combined noise contribution of the two readouts and is plotted in Figure <ref> along with the observed angle. We found that ambient seismic motion was coupling into the readout noise measurements through vibrations of the fiber optics. To assess the expected performance of the sensor in a quiet seismic environment (i.e. a seismic isolation platform), we attached a MBB-2 <cit.> three-axis seismometer to the vacuum chamber. The three seismometer channels were added into the coherent subtraction to remove this spurious coupling from the readout noise estimations. The readout noise with and without this additional subtraction is shown in Figure <ref> along with the vertical axis of the seismometer. The seismometer subtraction removes excess noise mainly above 1 Hz and at the microseism (0.2 Hz). The residual readout noise reaches a maximum sensitivity of ∼ 5 prad/√(Hz) at 1.5 Hz. Readout noise is not the only noise source that can limit inertial rotation sensors. Any effect that changes the angle of the proof-mass is indistinguishable from rotations of the platform. Accurately assessing these contributions is difficult with a single sensor. However, some fundamental noise sources can be calculated from first principles. The residual pressure of the current vacuum chamber can only reach ∼70 µ Torr. Thus, damping due to residual pressure dominates the mechanical loss of the sensor. Figure <ref> shows the residual pressure damping noise <cit.> calculated for an observed quality factor of 294. The damping noise limits the performance of the sensor below 0.2 Hz with the readout dominating above that. We believe the sensor noise is well represented above 0.1 Hz with the combination of damping and readout noise shown in Figure <ref>. The observed angle in this frequency band is then the angular component of the ambient seismic wavefield with the peak at ∼ 0.2 Hz being the oceanic microseism and the rise above 1 Hz being anthropogenically sourced. Also shown in Figure <ref> is the final noise goal of the instrument. With improvements to the vacuum, the sensor will be limited by internal losses instead of external damping. This is expected to improve the observed quality factor from 294 to > 1000. 
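The coherent subtraction used above to isolate the readout noise can be illustrated with a per-frequency least-squares regression: the part of one readout channel that is coherent with a set of witness channels (the second interferometer and the three seismometer axes) is removed, and the residual power approximates the incoherent readout noise. The sketch below is only a schematic stand-in for the mccs2 algorithm cited in the text; the segment length, windowing, and channel names are assumptions.

```python
# Schematic multi-witness coherent subtraction (a simplified stand-in for mccs2).
# For each frequency bin, the target spectrum is regressed on the witness spectra across
# segments, and the residual power estimates the incoherent noise. Values are assumed.
import numpy as np

def coherent_residual_psd(target, witnesses, fs, nseg=64):
    """Return (frequencies, PSD of `target` after removing the part coherent with `witnesses`)."""
    n = len(target) // nseg
    win = np.hanning(n)
    norm = fs * np.sum(win**2)
    def spectra(x):
        segs = np.asarray(x)[:n * nseg].reshape(nseg, n) * win
        return np.fft.rfft(segs, axis=1)
    Y = spectra(target)                                       # (nseg, nfreq)
    W = np.stack([spectra(w) for w in witnesses], axis=2)     # (nseg, nfreq, nwit)
    resid = np.empty_like(Y)
    for k in range(Y.shape[1]):                               # least-squares fit per frequency bin
        coeff, *_ = np.linalg.lstsq(W[:, k, :], Y[:, k], rcond=None)
        resid[:, k] = Y[:, k] - W[:, k, :] @ coeff
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    psd = 2.0 * np.mean(np.abs(resid)**2, axis=0) / norm
    return freqs, psd
```

A call such as coherent_residual_psd(theta_1, [theta_2, seis_x, seis_y, seis_z], fs), with the two interferometer angle records and the seismometer channels, would mirror the subtraction described above.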
Additionally, we expect further reduction in the readout noise when the sensor is deployed in a seismically quiet environment. § CONCLUSION We have constructed an inertial rotation sensor with interferometric readout and a cylindrical proof-mass, which achieves noise below 1 nrad/√(Hz) above 35 mHz and reaches a maximum sensitivity of ∼ 5 prad/√(Hz) at 1.5 Hz. This sensor is vacuum compatible and allows for remote mass centering. We plan further sensitivity improvements in the near future. With these, the sensor is expected to have a three-fold enhancement in sensitivity as compared to the current prototype. Similar sensors will soon be installed at the LIGO gravitational-wave observatories, which will significantly improve the observatories' seismic isolation. Additionally, the sensor's applications to seismology are actively being explored. § DATA AND SCHEMATICS Schematics of the CRS can be found at <https://github.com/mpross/CRS-Schematics>. Code and data to generate the plots shown here can be found at <https://github.com/mpross/CRS-Analysis>. § ACKNOWLEDGEMENTS Participation from the University of Washington, Seattle, was supported by funding from the NSF under Awards PHY-1607385, PHY-1607391, PHY-1912380, and PHY-1912514. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 865816).
http://arxiv.org/abs/2307.05597v1
20230710193937
Phase transitions in systems of particles with only hard-core interactions
[ "Deepak Dhar", "R. Rajesh", "Aanjaneya Kumar" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
http://arxiv.org/abs/2307.04703v1
20230710170154
Coexistence of self-similar and anomalous scalings in turbulent small-scale solar magnetic fields
[ "Gorobets Andrei Y.", "Berdyugina Svetlana V" ]
physics.flu-dyn
[ "physics.flu-dyn", "astro-ph.SR", "physics.plasm-ph" ]
Leibniz-Institut für Sonnenphysik (KIS), Schöneckstr. 6, Freiburg 79104, Germany Leibniz-Institut für Sonnenphysik (KIS), Schöneckstr. 6, Freiburg 79104, Germany Istituto ricerche solari Aldo e Cele Daccò (IRSOL), Faculty of Informatics, Università della Svizzera italiana, 6605 Locarno, Switzerland
Coexistence of self-similar and anomalous scalings in turbulent small-scale solar magnetic fields. A. Y. Gorobets and S. V. Berdyugina August 12, 2023
We report evidence that self-similarity and anomalous scalings coexist in a turbulent medium, particularly in fluctuations of the magnetic flux density in the magnetized plasma of the solar photosphere. The structure function scaling exponents in the inertial range have been analyzed for fluctuations grouped according to the sign of the path-dependent stochastic entropy production. It is found that the scaling exponents for fluctuations with positive entropy production follow the phenomenological linear dependence for magnetohydrodynamic turbulence. For fluctuations with negative entropy production, the scaling is anomalous.
In the lower solar atmosphere (photosphere), the evolution of magnetic fields is influenced by turbulent magnetoconvective motions of plasma, especially in regions with weak fields (≤ 0.1 Mx m^-2) of the so-called "quiet Sun", i.e. away from pores, sunspots, and their groups (active regions), where stronger magnetic fields suppress convective motions. The quiet Sun line-of-sight magnetic flux density (MFD) is observed as a rapidly evolving, spatially intermittent (fractal) quantity in magnetic field maps (magnetograms) <cit.>. Photospheric magnetograms (Fig. <ref>) are recorded by space missions with a high cadence during several 11-year solar cycles. The range of physical parameters in the solar atmosphere provides a unique laboratory for unprecedented continuous high spatial resolution studies of dynamic magnetic phenomena <cit.>. In this Letter, we report first empirical evidence for a dual character of the scaling law in temporal fluctuations of the MFD B(t) when their statistical realizations are analysed separately according to the sign of the stochastic entropy production. We employ an uninterrupted observation of the quiet Sun at the solar disk center obtained by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) space mission <cit.>. The analyzed time-series consists of 51,782 magnetograms in the Fe I 617.3 nm line from 2019 December 11, 00:00:22 UT to 2020 January 06, 23:58:07 UT, with the instrument-fixed cadence Δt = 45 s. This is exactly 27 days, which is somewhat longer than one synodic rotation period of 26.24 days. The magnetogram series is considered pixel-wise as discrete, time-ordered snapshots of magnetic flux evolution in the Eulerian frame of reference. In this context, every pixel as a probe in the field of view (FoV) provides a finite-length random realization of MFD fluctuations (also called trajectory or path) B(t) := {B(t_1), B(t_1+Δt), …, B(t_1+nΔt)} = {b_1, b_2, …, b_n} = 𝐁, t ∈ [1,n], where t is the local time index starting at the local origin t_1, and n is the length of the trajectory. The trajectory is a set of identically distributed, signed, non-Gaussian random variables; the sign of b_t designates the polarity of B(t) at a given time instance, and n is an exponentially distributed random number. At a given pixel, the total number of trajectories is arbitrary.
It depends on: the overall observation time, a particular solar magnetic field topology within the FoV, and the noise cutoff. Statistical properties of trajectories are assumed to be homogeneous in space for the quiet Sun, at least with the HMI spatio-temporal resolution [The empirical test of the Markov property at a higher resolution in <cit.> revealed that granular and intergranular regions had, to some extent, different statistical properties, which were neglected at that stage of the studies. More details of the relevant discrepancies were reported in <cit.>.]. Hence, trajectories of different pixels contribute to the overall statistics equally. The nature of the fluctuations enables an analysis that includes a measure of their irreversibility. Namely, the Δt-transitions in 𝐁 obey the Markov property <cit.>, and so allow computing the trajectory-dependent (total) stochastic entropy production Σ(𝐁) = ln[p_n(b_1,b_2,⋯,b_n)/p_n(b_n,⋯,b_2,b_1)] = ln[p(b_1)/p(b_n) ∏_k=1^n-1 p(b_{k+1}|b_k)/p(b_k|b_{k+1})], where p, p_n and p(b_j|b_i) are respectively the marginal, n-joint and Δt-step conditional probability density functions (PDF). The random quantity Σ is the measure of irreversibility of the trajectory, and its PDF obeys an exact symmetry relation, known as the detailed fluctuation theorem [For introduction and review see, for example: <cit.>]: p(Σ>0)/p(Σ<0) = e^|Σ|. That is, the total entropy consumption, Σ^- ≡ Σ<0, is exactly exponentially less probable than the total entropy generation, Σ^+ ≡ Σ>0, of the same magnitude |Σ|. Hereafter, the corresponding signs are placed as superscripts in the notations of estimated quantities. The detailed pixel calculus and the Markov property test for Σ at a higher spatial resolution are described in <cit.>. For HMI magnetograms, properties of the regular Markov chains were considered in <cit.>, and the validity of the fluctuation theorems (including Eq. (<ref>)) was shown in <cit.>. Henceforth, in our investigation of the scale invariance of B(t) fluctuations of turbulent origin, we take into account the sign of Σ, which defines two disjoint sets 𝐁^±. The conventional method of studying manifestations of scale invariance involves an analysis of a signal's self-similarity in terms of the q-order structure functions (SF) S_q(ℓ) ≡ ⟨|δ_ℓ B(t)|^q⟩ = ⟨|B(t+ℓ) - B(t)|^q⟩, where δ_ℓ(·) is an increment of a turbulent quantity at two points of the flow at a distance ℓ. Taylor's "frozen turbulence" hypothesis connects temporal and spatial scales in measurements, so scales in Eq. (<ref>) are used in units of spatial distance. The solar data we investigate do not resolve all vector components of the observable/inferred quantities like the photospheric velocity and magnetic fields, and consequently details of the real flows are quite uncertain. However, we assume that Taylor's hypothesis is applicable for the MFD of the quiet Sun <cit.>. For the set of 1D trajectories of finite length, the SF are computed as the ensemble average, and ℓ is expressed in units of the sampling interval Δt. The phenomenological theory of turbulence establishes fundamental scaling relations for observable quantities, and hence defines power-law dependencies between SF. The Kolmogorov phenomenology <cit.> of fully developed hydrodynamic (HD) turbulence at a high Reynolds number R = vℓ_0/ν predicts the scaling law in the inertial range λ ≪ ℓ ≪ ℓ_0: δ_ℓ v ∼ ε^{1/3} ℓ^{1/3}, where v is the velocity, ε is the average energy dissipation rate, ν is the viscosity, and ℓ_0 and λ are the integral and dissipation scales, respectively.
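Both ingredients introduced above — the trajectory-wise entropy production Σ built from the one-step transition statistics, and the q-order structure functions — can be estimated directly from binned trajectories. The sketch below does so for an ensemble of synthetic 1D trajectories; the synthetic data, the number of bins, and the array names are assumptions that merely keep the example self-contained, standing in for the HMI magnetogram pixels of the actual analysis.

```python
# Schematic estimate of the trajectory entropy production Sigma and of the structure
# functions S_q(l) from an ensemble of 1D trajectories. The synthetic random-walk data
# and the binning are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
trajs = [np.cumsum(rng.standard_normal(rng.integers(50, 200))) for _ in range(2000)]

# One-step transition statistics on a common grid of bins.
edges = np.linspace(min(map(np.min, trajs)), max(map(np.max, trajs)), 41)
pairs = np.concatenate([np.column_stack((b[:-1], b[1:])) for b in trajs])
H, _, _ = np.histogram2d(pairs[:, 0], pairs[:, 1], bins=[edges, edges])
H += 1e-12                                      # regularize empty bins
p_marg = H.sum(axis=1) / H.sum()                # marginal PDF p(b)
p_cond = H / H.sum(axis=1, keepdims=True)       # conditional PDF p(b_{k+1} | b_k)

def entropy_production(b):
    i = np.clip(np.digitize(b, edges) - 1, 0, len(edges) - 2)
    s = np.log(p_marg[i[0]]) - np.log(p_marg[i[-1]])
    s += np.sum(np.log(p_cond[i[:-1], i[1:]]) - np.log(p_cond[i[1:], i[:-1]]))
    return s

sigma = np.array([entropy_production(b) for b in trajs])
pos = [b for b, s in zip(trajs, sigma) if s > 0]    # entropy-generating trajectories
neg = [b for b, s in zip(trajs, sigma) if s < 0]    # entropy-consuming trajectories

def structure_function(group, q, lag):
    incs = np.concatenate([np.abs(b[lag:] - b[:-lag]) for b in group if len(b) > lag])
    return np.mean(incs ** q)

lags = np.arange(1, 20)
S4_pos = [structure_function(pos, 4, l) for l in lags]
S4_neg = [structure_function(neg, 4, l) for l in lags]
```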
Turbulence of a magnetized plasma is described in the framework of magnetohydrodynamics (MHD). The corresponding Iroshnikov-Kraichnan phenomenology <cit.> includes the Alfvén wave effect of coupling between velocity and magnetic field fluctuations on small scales by the integral-scale magnetic field B_0 <cit.>. At a high magnetic Reynolds number Rm = v_A ℓ_0/η, the self-similar scaling exponents are δ_ℓ v ∼ δ_ℓ B ∼ [ε v_A]^{1/4} ℓ^{1/4}, where η is the magnetic diffusivity, v_A ≡ B_0(4πρ)^{-1/2} is the Alfvén velocity in B_0, ρ is the mass density, and ℓ_0 = v_A^3 ε^{-1}. In terms of SF, the self-similar (linear) scalings in Eqs. (<ref>-<ref>) read S_q(ℓ) ∼ ℓ^{ξ(q)}, ξ(q) = q/m, with m=3 for HD and m=4 for MHD turbulence. To cope with experimental limitations and irregularities of flows which hinder the analysis of scaling in S_q(ℓ), the concept of Extended Self-Similarity (ESS) was proposed in Refs. <cit.>. In essence, ESS is the set of functional dependencies of SF of any order on the SF of the order for which ξ(q)=1. Hence, for the case of MHD turbulence we focus on ESS with the relative exponents ξ_4: S_q(ℓ) ∼ [S_4(ℓ)]^{ξ_4(q)}, ξ_4(q) = ξ(q)/ξ(4). The linear scalings in Eq. (<ref>) are violated by spatial inhomogeneities of the dissipation on small scales, that is, by intermittency. Thus, the scaling exponents (anomalously) deviate from the exact linear relations, as has become evident from extensive experimental and numerical studies <cit.>. Models for intermittency differ by their assumptions about the statistical properties of the energy dissipation rate ε, such as log-normal <cit.>, multifractal <cit.>, and log-Poisson <cit.>. The latter was revealed for solar wind MHD turbulence <cit.> and applied to photospheric flows <cit.>. The "standard model" of Ref. <cit.>, the non-parametric version of the log-Poisson model for MHD turbulence, ξ_4(q) = q/8 + 1 - (1/2)^{q/4}, is used as a reference for anomalous scaling in the results presented below. In Fig. <ref>, the SF scalings are shown according to Eq. (<ref>), computed separately for the two sets 𝐁^±. The discrepancy in slopes with respect to the sign of Σ is clearly seen, especially for higher orders. Following ideas from Ref. <cit.>, the inertial range is defined as the range in which Kolmogorov's 4/5 law, S_3(ℓ) = -(4/5) ε ℓ, holds. For our data, we found the inertial range to be from 15Δt to 19Δt. The range boundaries were modified by ±Δt to compensate for the rather coarse sampling cadence, because the linear fits showed substantial variations with the range boundaries. This modification also helps to improve the statistics of the fits. Therefore, an SF scaling (Eq. <ref>) in the inertial range is estimated by a set of independent linear fits within the extended inertial range [(15±1)Δt, (19±1)Δt]. The ultimate value of the scaling exponent ξ_4 is then computed as the weighted mean of 9 exponents for every combination of the inertial range boundary variations given by (0, ±1). This procedure was applied to three groups of fluctuations: 𝐁^+, 𝐁^-, and their joint data set. The result is shown in Fig. <ref>. Statistical robustness of the result is highlighted by the 99.99% confidence level computed by χ^2 minimization. Errors of the means are smaller than the symbols and are not shown. Summarizing, anomalous scaling is an intrinsic property of the MFD fluctuations in the quiet Sun (diamonds in Fig. <ref>). The main result is the statistically significant difference between ξ^+(q) and ξ^-(q).
The former exhibits scaling exponents rather distinctly following the linear dependence q/4, in accordance with the Iroshnikov-Kraichnan phenomenology. Contrastly, fluctuations along ^--trajectories have anomalous scaling exponents, and the curve of ξ^-(q) resembles the MHD log-Poisson model (Eq. <ref>). However, we note that models describing curves of ξ(q)^- and ξ(q) are out of the scope of the present Letter. Following the arguments of She and Leveque <cit.>, one can interpret our finding that entropy consuming fluctuations could be related to entropy (energy) sinks which support building up of coherent structures at larger scales due to correlations induced by intermittency. Correspondingly, entropy generating fluctuations are related to dissipation processes according to the phenomenological cascade model. To conclude, splitting measurements according to the sign of the entropy production allows detecting an unexpected coexistence of self-similar and anomalous scalings in the inertial range of turbulent small-scale photospheric magnetic fields on the Sun. Future numerical and experimental/observational applications of the method proposed in this Letter may advance understanding of the self-similarity in turbulent phenomena. We thank Petri Käapylä for stimulating discussions. Solar Dynamics Observatory (SDO) is a mission for NASA's Living With a Star (LWS) program. The Helioseismic and Magnetic Imager (HMI) data were provided by the Joint Science Operation Center (JSOC). 10 benziExtendedSelfSimilarityDissipation1993 R. Benzi, S. Ciliberto, C. Baudet, G. Ruiz Chavarria, and R. Tripiccione. Extended Self-Similarity in the Dissipation Range of Fully Developed Turbulence. Europhysics Letters, 24(4):275, November 1993. benziExtendedSelfsimilarityTurbulent1993 R. Benzi, S. Ciliberto, R. Tripiccione, C. Baudet, F. Massaioli, and S. Succi. Extended self-similarity in turbulent flows. Physical Review E, 48(1):R29–R32, July 1993. biskampCascadeModelsMagnetohydrodynamic1994 D. Biskamp. Cascade models for magnetohydrodynamic turbulence. Physical Review E, 50(4):2702–2711, October 1994. bustamanteNonequilibriumThermodynamicsSmall2005 Carlos Bustamante, Jan Liphardt, and Felix Ritort. The nonequilibrium thermodynamics of small systems. Physics Today, 58(7):43–48, 2005. consoliniCharacterizationSolarPhotospheric1999 G. Consolini, F. Berrilli, E. Pietropaolo, R. Bruno, V. Carbone, B. Bavassano, and G. Ceppatelli. Characterization of the Solar Photospheric Velocity Field: A New Approach. In Magnetic Fields and Solar Processes, volume 448 of ESA Special Publication, page 209, December 1999. consoliniScalingBehaviorVertical1999 G. Consolini, V. Carbone, F. Berrilli, R. Bruno, B. Bavassano, C. Briand, B. Caccin, G. Ceppatelli, A. Egidi, I. Ermolli, A. Florio, G. Mainella, and E. Pietropaolo. Scaling behavior of the vertical velocity field in the solar photosphere. Astronomy and Astrophysics, 344:L33–L36, April 1999. faurobert-schollTurbulentMagneticFields1995 M. Faurobert-Scholl, N. Feautrier, F. Machefert, K. Petrovay, and A. Spielfiedel. Turbulent magnetic fields in the solar photosphere: Diagnostics and interpretation. Astronomy and Astrophysics, 298:289, June 1995. frischTurbulence1995 Uriel Frisch. Turbulence. 1995. giannattasioScalingPropertiesMagnetic2022 F. Giannattasio, G. Consolini, F. Berrilli, and P. De Michelis. Scaling properties of magnetic field fluctuations in the quiet Sun. Astronomy & Astrophysics, 659:a180, 2022. gorobetsStochasticEntropyProduction2019 A. Y. Gorobets and S. V. 
Berdyugina. Stochastic entropy production in the quiet Sun magnetic fields. Monthly Notices of the Royal Astronomical Society: Letters, 483(1):L69–L74, February 2019. gorobetsMaximumEntropyLimit2017 A. Y. Gorobets, S. V. Berdyugina, T. L. Riethmüller, J. Blanco Rodríguez, S. K. Solanki, P. Barthol, A. Gandorfer, L. Gizon, J. Hirzberger, M. noortvan Noort, J. C. Del Toro Iniesta, D. Orozco Suárez, W. Schmidt, V. Martínez Pillet, and M. Knölker. The Maximum Entropy Limit of Small-scale Magnetic Field Fluctuations in the Quiet Sun. The Astrophysical Journal Supplement Series, 233(1):5, 2017. gorobetsMARKOVPROPERTIESMAGNETIC2016 A. Y. Gorobets, J. M. Borrero, and S. Berdyugina. Markov Properties of The Magnetic Field in The Quiet Solar Photosphere. The Astrophysical Journal, 825(2):L18, July 2016. grauerScalingHighorderStructure1994 R. Grauer, J. Krug, and C. Marliani. Scaling of high-order structure functions in magnetohydrodynamic turbulence. Physics Letters A, 195(5):335–338, December 1994. guerraSpatioTemporalScalingTurbulent2015 J. A. Guerra, A. Pulkkinen, V. M. Uritsky, and S. Yashiro. Spatio-Temporal Scaling of Turbulent Photospheric Line-of-Sight Magnetic Field in Active Region NOAA 11158. Solar Physics, 290(2):335–350, 2015. harrisFluctuationTheoremsStochastic2007 R. J. Harris and G. M. Schütz. Fluctuation theorems for stochastic dynamics. Journal of Statistical Mechanics: Theory and Experiment, 2007(07):P07020–P07020, July 2007. iroshnikovTurbulenceConductingFluid1964 P. S. Iroshnikov. Turbulence of a Conducting Fluid in a Strong Magnetic Field. Soviet Astronomy, 7:566, February 1964. janssenFractalDimensionSmallscale2003 K. Janßen, A. Vögler, and F. Kneer. On the fractal dimension of small-scale magnetic structures in the Sun. Astronomy & Astrophysics, 409(3):1127–1134, October 2003. jarzynskiEqualitiesInequalitiesIrreversibility2011 Christopher Jarzynski. Equalities and Inequalities: Irreversibility and the Second Law of Thermodynamics at the Nanoscale. Annual Review of Condensed Matter Physics, 2(1):329–351, March 2011. klagesNonequilibriumStatisticalPhysics2013 Rainer Klages, W. Just, and Christopher Jarzynski, editors. Nonequilibrium Statistical Physics of Small Systems: Fluctuation Relations and Beyond. Reviews of Nonlinear Dynamics and Complexity. Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, 2013. kolmogorovRefinementPreviousHypotheses1962 A. N. Kolmogorov. A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. Journal of Fluid Mechanics, 13(1):82–85, 1962. K41 A. N. Kolmogorov. Dokl. Akad. Nauk SSSR 31, 538 (1941) [Proc. R. Soc. London A 434, 15 (1991)]. kraichnanInertialRangeSpectrumHydromagnetic1965 Robert H. Kraichnan. Inertial-Range Spectrum of Hydromagnetic Turbulence. Physics of Fluids, 8:1385–1387, July 1965. liuComparisonLineofSightMagnetograms2012 Y. Liu, J. T. Hoeksema, P. H. Scherrer, J. Schou, S. Couvidat, R. I. Bush, T. L. Duvall, K. Hayashi, X. Sun, and X. Zhao. Comparison of Line-of-Sight Magnetograms Taken by the Solar Dynamics Observatory/Helioseismic and Magnetic Imager and Solar and Heliospheric Observatory/Michelson Doppler Imager. Solar Physics, 279(1):295–316, July 2012. marconiFluctuationDissipationResponse2008 Umberto Marini Bettolo Marconi, Andrea Puglisi, Lamberto Rondoni, and Angelo Vulpiani. Fluctuation–dissipation: Response theory in statistical physics. Physics Reports, 461(4):111–195, June 2008. meneveauSimpleMultifractalCascade1987 C. Meneveau and K. R. 
Sreenivasan. Simple multifractal cascade model for fully developed turbulence. Physical Review Letters, 59(13):1424–1427, 1987. politanoModelIntermittencyMagnetohydrodynamic1995 H. Politano and A. Pouquet. Model of intermittency in magnetohydrodynamic turbulence. Physical Review E, 52(1):636–641, July 1995. rinconSunSupergranulation2018a François Rincon and Michel Rieutord. The Sun's supergranulation. Living Reviews in Solar Physics, 15(1):6, 2018. schekochihinMHDTurbulenceBiased2022 Alexander A. Schekochihin. MHD turbulence: A biased review. Journal of Plasma Physics, 88(5):155880501, October 2022. scherrerHelioseismicMagneticImager2012 P. H. Scherrer, J. Schou, R. I. Bush, A. G. Kosovichev, R. S. Bogart, J. T. Hoeksema, Y. Liu, T. L. Duvall, J. Zhao, A. M. Title, C. J. Schrijver, T. D. Tarbell, and S. Tomczyk. The Helioseismic and Magnetic Imager (HMI) Investigation for the Solar Dynamics Observatory (SDO). Solar Physics, 275:207–227, January 2012. schouDesignGroundCalibration2012 J. Schou, P. H. Scherrer, R. I. Bush, R. Wachter, S. Couvidat, M. C. Rabello-Soares, R. S. Bogart, J. T. Hoeksema, Y. Liu, T. L. Duvall, D. J. Akin, B. A. Allard, J. W. Miles, R. Rairden, R. A. Shine, T. D. Tarbell, A. M. Title, C. J. Wolfson, D. F. Elmore, A. A. Norton, and S. Tomczyk. Design and Ground Calibration of the Helioseismic and Magnetic Imager (HMI) Instrument on the Solar Dynamics Observatory (SDO). Solar Physics, 275(1-2):229–259, January 2012. schumacherColloquiumUnusualDynamics2020 Jörg Schumacher and Katepalli R. Sreenivasan. Colloquium: Unusual dynamics of convection in the Sun. Reviews of Modern Physics, 92:041001, October 2020. seifertStochasticThermodynamicsFluctuation2012 Udo Seifert. Stochastic thermodynamics, fluctuation theorems and molecular machines. Reports on Progress in Physics, 75(12):126001, December 2012. seifertStochasticThermodynamicsThermodynamic2019 Udo Seifert. From stochastic thermodynamics to thermodynamic inference. Annual Review of Condensed Matter Physics, 10(1):171–192, March 2019. sheHierarchicalStructuresScalings1997 Zhen-Su She. Hierarchical structures and scalings in turbulence. In Oluş Boratav, Alp Eden, and Ayse Erzan, editors, Turbulence Modeling and Vortex Dynamics, Lecture Notes in Physics, pages 28–52, Berlin, Heidelberg, 1997. Springer. sheUniversalScalingLaws1994 Zhen-Su She and Emmanuel Leveque. Universal scaling laws in fully developed turbulence. Physical Review Letters, 72(3):336–339, 1994. stenfloScalingLawsMagnetic2012 J. O. Stenflo. Scaling laws for magnetic fields on the quiet Sun. Astronomy and Astrophysics, 541:A17, 2012. stolovitzkyKolmogorovRefinedSimilarity1992 G. Stolovitzky, P. Kailasnath, and K. R. Sreenivasan. Kolmogorov's refined similarity hypotheses. Physical Review Letters, 69(8):1178–1181, 1992.
http://arxiv.org/abs/2307.04324v1
20230710033812
Study of the $B^-\to K^-ηη_c$ decay due to the $D\bar{D}$ bound state
[ "Xin-Qiang Li", "Li-Juan Liu", "En Wang", "Le-Le Wei" ]
hep-ph
[ "hep-ph" ]
[email protected] Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei 430079, China Center for High Energy Physics, Peking University, Beijing 100871, China [email protected] School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, Henan 450001, China [email protected] School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, Henan 450001, China [email protected] Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei 430079, China We study the B^- → K^- ηη_c decay by taking into account the S-wave contributions from the pseudoscalar meson–pseudoscalar meson interactions within the unitary coupled-channel approach, where the DD̅ bound state is dynamically generated. In addition, the contribution from the intermediate resonance K_0^*(1430), with K_0^*(1430)→ K^-η, is also considered. Our results show that there is a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which could be associated with the D D̅ bound state. The future precise measurements of the B^- → K^- ηη_c process at the Belle II and LHCb experiments could be, therefore, used to check the existence of the D D̅ bound state, and to deepen our understanding of the hadron-hadron interactions. Study of the B^- → K^- ηη_c decay due to the DD̅ bound state Le-Le Wei ============================================================ § INTRODUCTION Since the discovery of X(3872) by the Belle Collaboration in 2003 <cit.>, many exotic states, which do not fit into the expectations of the conventional quark models, have been observed experimentally during the past two decades <cit.>. Many of these exotic states, especially the ones observed in the charmonium sector, are observed around the threshold of a pair of heavy hadrons; some of them, such as X(3872) <cit.>, Z_c(3900) <cit.> and X(4160) <cit.>, can be explained as the hadronic molecules. However, the hadronic molecular states with mass near the D D̅ threshold have not yet been observed experimentally, and further detailed studies are therefore required both theoretically and experimentally <cit.>. In Ref. <cit.>, by taking into account the ππ, K K̅, D D̅, D_s D̅_s, ηη, and ηη_c coupled channels, the authors predicted a narrow hidden charm resonance with quantum numbers I(J^PC)=0(0^++) and mass around 3700 MeV [denoted as X(3700) throughout this paper] within the unitary coupled-channel approach. Furthermore, by considering the η_c as a pure c c̅ state and the η–η^' mixing, together with the same parameters as used in Ref. <cit.>, the pole of the new X(3700) state was predicted to be √(s)=(3722-i18) MeV within the unitary coupled-channel approach <cit.>. The mass of the D D̅ bound state predicted by other different models is also basically around the threshold of D D̅ <cit.>, and the theoretical studies of the experimental measurements of the processes e^+ e^- → J/ψ D D̅ <cit.>, B^+ → D^0 D̅^0 K^+ <cit.> and γγ→ D D̅ <cit.> all support the existence of such a D D̅ bound state. Meanwhile, some processes have also been suggested to search for the D D̅ bound state, such as ψ(3770) →γ X(3700) →γηη^', ψ(4040) →γ X(3700) →γηη^', e^+ e^- → J/ψ X(3700) → J/ψηη^' <cit.>, ψ(3770) →γ D D̅ <cit.>, and Λ_b →Λ D D̅ <cit.>. 
It is worth mentioning that the BESIII Collaboration has recently searched for the X(3700) in the ψ(3770) →γηη^' decay for the first time, observing however no significant signals due to the low detection efficiencies of the photons <cit.>. Although the DD̅ bound state X(3700) couples mainly to the D D̅ and D_s D̅_s channels, it is not easy to search for any signals of the state in these systems. This is due to the fact that, since its mass is a little bit lower than the D D̅ threshold, the X(3700) state would manifest itself as a near-threshold enhancement in the D D̅ invariant mass distributions, which may be difficult to identify due to the low detection efficiencies near the threshold. On the other hand, the X(3700) state has also a sizeable coupling to the ηη_c channel, as observed in Refs. <cit.>. Since the ηη_c threshold is about 200 MeV lower than the predicted mass of X(3700), one expects that, if the D D̅ bound state exists, a clear peak near the D D̅ threshold would appear in the ηη_c invariant mass distributions of some processes with large phase space. As is well known, the three-body weak decays of the B mesons involve more complicated dynamics than the two-body decays and can, therefore, provide a wealth of information about the meson-meson interactions and hadron resonances <cit.> (see e.g. Ref. <cit.> for a recent review). For instance, the B → K + X/Y/Z decay is an ideal process to produce the charmoniumlike hadronic molecular states <cit.>, and many exotic states have been observed experimentally through the B-meson weak decays during the past few years, such as Z_cs(4000) and Z_cs(4220)  <cit.>, X(4140) <cit.> in B^+ → J/ψϕ K^+, as well as X_0(2900) and X_1(2900) in B^+ → D^+ D^- K^+ decay <cit.>. In this paper, we propose to search for the D D̅ bound state X(3700) in the B^- → K^- ηη_c decay. It is worth mentioning that the Belle Collaboration has already searched for the process in 2015 based on 772×10^6 BB̅ pairs collected at the Υ(4S) resonance <cit.>, and no significant signal of the D D̅ bound state was observed due to insufficient statistics. However, the Belle II Collaboration will accumulate about 50 times the Belle data set <cit.>, and is expected to make the further precise measurements of the B^- → K^- ηη_c decay, which will shed more light on the existence of the D D̅ bound state in this process. In addition, the authors of Ref. <cit.> have suggested to search for the D D̅ bound state in the ηη_c mass distribution of the B^+ → K^+ ηη_c decay, and predicted a branching ratio of ℬ(B^+ → ( X_q q̅→η_c η ) K^+ )= ( 0.9 ∼ 6.7) × 10^-4. In this paper, motivated by the observations made above, we study the B^- → K^- ηη_c decay by taking into account the pseudoscalar meson–pseudoscalar interactions within the chiral unitary approach, where the DD̅ bound state is generated dynamically. On the other hand, the B^- → K^- ηη_c decay can also proceed through the subsequent decay of the intermediate resonance K^*_0(1430), i.e. K^*_0(1430) → K η, whose contribution will be considered in this paper too. We will demonstrate that, besides a peak of K_0^*(1430) in the K^-η invariant mass distribution, there is a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which could be associated with the D D̅ bound state. Therefore, future precise measurements of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments could be used to check the existence of the D D̅ bound state, and to deepen our understanding of the hadron-hadron interactions. 
This paper is organized as follows. In Sec. <ref>, we will firstly introduce our formalism for the B^- → K^- ηη_c decay. Our numerical results and discussions are then presented in Sec. <ref>. In Sec. <ref>, we give our final conclusion. § FORMALISM In analogy to the discussions made in Refs. <cit.>, the B^- → K^- ηη_c decay proceeds via the following three steps: the weak decay, the hadronization and the final state interactions. Explicitly, the b quark of the B^- meson firstly decays into a c quark and a W^- boson, and then the W^- boson turns into a c̅ s pair. In order to give rise to the K^- ηη_c final state, the u̅ antiquark of the initial B^- meson and the c̅ s pair from the W^- subsequent decay have to hadronize together with the q̅ q (≡u̅ u + d̅ d + s̅ s) created from the vacuum with the quantum numbers J^PC=0^++. The relevant quark level diagrams can be classified as the internal W^- emission mechanisms and external W^- emission mechanisms, as depicted in Figs. <ref>(a)–(b) and <ref>(c)–(d), respectively. Here we have neglected all the CKM suppressed diagrams that are proportional to the CKM element V_ub. The meson-meson systems formed by the hadronization of q_i, q̅_j and q̅_k q_k are given by ∑^3_k=1q_i(q̅_k q_k)q̅_j=∑^3_k=1M_ikM_kj=(M^2)_ij, with the SU(4) q q̅ matrix defined as M=( [ uu̅ ud̅ us̅ uc̅; du̅ dd̅ ds̅ dc̅; su̅ sd̅ ss̅ sc̅; cu̅ cd̅ cs̅ cc̅ ]), which could be expressed in terms of the physical pseudoscalar mesons as <cit.>, M = ( [ π^0/√(2)+ η/√(3)+η^'/√(6) π^+ K^+ D̅^0; π^- -π^0/√(2)+η/√(3)+η^'/√(6) K^0 D^-; K^- K̅^0 -η/√(3) +√(2/3)η^' D_s^-; D^0 D^+ D_s^+ η_c ]). Thus, by isolating the meson K^-, one could easily obtain the components of the meson systems for Figs. <ref>(a) and  <ref>(b) as follows: | H ⟩^a = V_p V_cb V_cs^∗ c(u̅ u + d̅ d + s̅ s) c̅su̅ = V_p V_cb V_cs^∗(M^2)_44 K^- = V_p V_cb V_cs^∗( D^0 D̅^0 + D^+ D^- + D_s^+ D_s^- ) K^-, | H ⟩^b = V_p V_cb V_cs cc̅s(u̅ u + d̅ d + s̅ s) u̅ = V_p V_cb V_cs^∗(M^2)_31η_c = V_p V_cb V_cs^∗( 1/√(2)K^- π^0 + 3/√(6)K^- η^') η_c, where V_cb=0.04182 and V_cs=0.97349 are the elements of the CKM matrix, and V_p encodes all the remaining factors arising from the production vertex. Then, the final state interactions of DD̅, D_sD̅_s, and η'η_c will dynamically generate the DD̅ bound state, which could decay into ηη_c system. Here we do not consider the component K^-π^0η_c, since the isospin of the π^0η_c system is I=1. Similarly, we can write the hadron components for Figs. <ref>(c) and  <ref>(d) that could couple to the K^-ηη_c system as follows: | H ⟩^c = V_p V_cb V_cs^∗× C ×( K^- D_s^+ ) D_s^-, | H ⟩^d = V_p V_cb V_cs^∗× C ×( K^- D̅^0 ) D^0, where we have introduced the color factor C to account for the relative weight of the external W^- emission mechanisms with respect to the internal W^- emission mechanism, and will take C=3 in the case of color number N_C=3, as done in Refs. <cit.>. According to the above discussions, the K^- ηη_c final state could not be produced directly through the tree-level diagrams of the B^- decay, but can via the final state interactions of the coupled channels D^0 D̅^0, D^+ D^-, D_s^+ D_s^-, and η'η_c, which could then generate the DD̅ bound state, as shown in Fig. <ref>. The total amplitude of Fig. <ref> can be expressed as 𝒯_X = V_p V_cb V_cs^∗[ G_D^+ D^- t_D^+ D^- →ηη_c. . + (1+C) × G_D^0 D̅^0 t_D^0 D̅^0 →ηη_c. . + (1+C) × G_D_s^+ D_s^- t_D_s^+ D_s^- →ηη_c. . 
+ 3/√(6)× G_η'η_c t_η'η_c →ηη_c], where G_l is the loop function for the two-meson propagator in the l-th channel, and its explicit expression is given by <cit.> G_l = i ∫d^4 q/(2π)^41/q^2 - m_1^2 + iϵ1/(P-q)^2 - m_2^2 + iϵ = 1/16π^2[α_l + lnm_1^2/μ^2 + m_2^2 - m_1^2 + s/2slnm_2^2/m_1^2. + p/√(s)×(lns - m_2^2 + m_1^2 + 2p√(s)/-s + m_2^2 - m_1^2 + 2p √(s). . . + lns + m_2^2 - m_1^2 + 2p√(s)/-s - m_2^2 + m_1^2 + 2p √(s)) ], with the subtraction constant α_l= -1.3 for the coupled channels D^+ D^-, D^0 D̅^0, D_s^+ D_s^-, and η^'η_c, and μ= 1500 MeV, being the same as used in Ref. <cit.>. √(s)=M_ηη_c is the invariant mass of the two mesons in the l-th channel, and m_1 and m_2 are the mass of these two mesons. P is the total four-momentum of the two mesons in the l-th channel, and p is the magnitude of the three-momentum of each meson in the meson-meson center of mass frame, with p = λ^1/2( s, m_1^2, m_2^2 )/2 √(s), where λ(x,y,z) = x^2 + y^2 + z^2 - 2xy - 2yz -2zx is the Källen function. The transition amplitudes in Eq. (<ref>) can be generically written as t_j → k = g_j × g_k/M_ηη_c^2 - M_X(3700)^2 + i M_X(3700)Γ_X(3700), where the mass M_X(3700) = 3722 MeV, the width Γ_X(3700) = 36 MeV, and the coupling constants g_j are taken from Ref. <cit.>. For convenience, we also show in Table <ref> the values of these couplings. On the other hand, the B^- → K^- ηη_c decay could also proceed via the intermediate excited kaon mesons. According to the Dalitz plot shown in Fig. <ref>, one can see that only the well-established resonance K^*_0(1430) could contribute to this process, since the K^*_0(1430) couples to the channel K^-η in an S-wave way with a branching fraction ℬ(K^*_0(1430)→ Kη)=(8.6^+2.7_-3.4)% <cit.>. Therefore, in this paper, we will neglect all the other excited kaon mesons, and only take into account the contribution from the intermediate K^*_0(1430) as shown by Fig. <ref>, whose amplitude can be expressed as 𝒯_K^*_0 = V_p×β× M_K^*_0(1430)^2/M_K^- η^2 - M_K^*_0(1430)^2 + i M_K^*_0(1430)Γ_K^*_0(1430), where the parameter β stands for the relative weight of the K^*_0(1430) contribution with respect to that of the DD̅ bound state X(3700), and M_K^- η is the invariant mass of the K^- η system. We will take as input M_K^*_0(1430) = 1425 MeV and Γ_K^*_0(1430) = 270 MeV <cit.>. With the amplitudes of Eqs. (<ref>) and (<ref>) at hand, the doubly differential decay width of the B^- → K^- ηη_c process can be written as d^2 Γ/dM_ηη_cdM_K^- η = 1/(2 π)^3M_ηη_c M_K^- η/8 M_B^-^3|𝒯_X + 𝒯_K^*_0|^2. The differential decay width dΓ/dM_ηη_c can then be obtained by integrating Eq. (<ref>) over the K^- η invariant mass M_K^- η, whose integration range is given by ( M^2_K^- η)_min = ( E_K^-^* + E_η^* )^2 - ( √(E_η^*2 - m_η^2) + √(E_K^-^*2 - m_K^-^2))^2, ( M^2_K^- η)_max = ( E_K^-^* + E_η^* )^2 - ( √(E_η^*2 - m_η^2) - √(E_K^-^*2 - m_K^-^2))^2, where E_K^-^* and E_η^* are the energies of K^- and η in the ηη_c rest frame, respectively. Explicitly, we have E_K^-^* = M^2_B^- - M^2_ηη_c - M^2_K^-/2 M_ηη_c, E_η^* = M^2_ηη_c - M^2_η_c + M^2_η/2 M_ηη_c. Similarly, we can also obtain the differential decay width dΓ/dM_K^- η by integrating Eq. (<ref>) over the ηη_c invariant mass M_ηη_c, and the range of integration can be obtained by exchanging K^- and η_c in Eqs. (<ref>)–(<ref>). Finally, by integrating the differential width dΓ/dM_ηη_c (dΓ/dM_K^- η) over M_ηη_c (M_K^- η), we can obtain the partial decay width of the B^- → K^- ηη_c process, Γ = ∫dM_ηη_c∫dM_K^- η1/(2 π)^3M_ηη_c M_K^- η/8 M_B^-^3|𝒯_X + 𝒯_K^*_0|^2. 
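As a numerical illustration of the preceding formulas, the ηη_c spectrum dΓ/dM_ηη_c can be obtained by integrating the doubly differential width over M_K^-η between the quoted kinematic limits. In the sketch below, the coupled-channel amplitude 𝒯_X is replaced by a single Breit-Wigner with the X(3700) mass and width; this simplification, the grid choices, and the omission of the overall factor V_p are assumptions of the illustration and do not reproduce the amplitude actually used in the analysis.

```python
# Schematic evaluation of dGamma/dM(eta eta_c) by integrating over M(K eta) between the
# Dalitz limits. T_X is approximated by a single Breit-Wigner with the X(3700) parameters,
# an illustrative simplification of the coupled-channel amplitude in the text.
import numpy as np

mB, mK, meta, metac = 5279.3, 493.7, 547.9, 2983.9   # masses in MeV (rounded PDG values)
mX, gX = 3722.0, 36.0                                # X(3700) mass and width
mKst, gKst = 1425.0, 270.0                           # K*_0(1430) mass and width
beta = 0.004                                         # relative weight of the K*_0(1430) term

def bw(m, m0, g0):
    return m0**2 / (m**2 - m0**2 + 1j * m0 * g0)

def mKeta_limits(m23):            # m23 = M(eta eta_c); returns (min, max) of M(K eta)
    EK = (mB**2 - m23**2 - mK**2) / (2 * m23)
    Eeta = (m23**2 - metac**2 + meta**2) / (2 * m23)
    pK = np.sqrt(max(EK**2 - mK**2, 0.0))
    peta = np.sqrt(max(Eeta**2 - meta**2, 0.0))
    lo = (EK + Eeta)**2 - (peta + pK)**2
    hi = (EK + Eeta)**2 - (peta - pK)**2
    return np.sqrt(lo), np.sqrt(hi)

def dGamma_dM(m23, npts=400):     # arbitrary normalization (V_p omitted)
    lo, hi = mKeta_limits(m23)
    m12 = np.linspace(lo, hi, npts)
    amp = bw(m23, mX, gX) + beta * bw(m12, mKst, gKst)
    integrand = m23 * m12 * np.abs(amp)**2 / (8 * (2 * np.pi)**3 * mB**3)
    return np.trapz(integrand, m12)

masses = np.linspace(meta + metac + 1.0, 3850.0, 100)
spectrum = [dGamma_dM(m) for m in masses]
```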
Here all the meson masses involved are taken from the Particle Data Group <cit.>. § RESULTS AND DISCUSSION In our model, we have two free parameters, V_p and β. The parameter V_p is a global factor and its value does not affect the shapes of the ηη_c and K^- η invariant mass distributions, and thus we take V_p=1 for simplicity. The parameter β represents the relative weight of the contribution from K^*_0(1430) with respect to that from X(3700), and we take the default value β=0.004 in order to make the contributions from X(3700) and K^*_0(1430) within the same order of magnitude. Firstly, we show in Fig. <ref> the normalized ηη_c and K^- η invariant mass distributions with β=0.004. One can see a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which should be associated with the D D̅ bound state X(3700). In addition, a K^*_0(1430) signal appears in the K^- η invariant mass distribution, but gives rise to a smooth shape in the ηη_c invariant mass distribution and does not affect the peak structure of the X(3700) significantly. It should be stressed that the line shape of the X(3700) in the ηη_c invariant mass distribution is different from that of a Breit-Wigner form, which is a typical feature of the DD̅ molecular state. We also show in Fig. <ref> the Dalitz plot for the B^- → K^- ηη_c decay in the (M_ηη_c^2, M_K^- η^2) plane, where one can see two clear bands corresponding to the X(3700) and K^*_0(1430) resonances, respectively. The value of the parameter β is unknown, and could be determined if the experimental measurements of the B^- → K^- ηη_c decay are available in the future. In order to study the dependence of our results on β, we show in Fig. <ref> the predicted ηη_c and K^- η (b) invariant mass distributions of the process with three different values of β = 0.003, 0.004, 0.005. One can see that the peak of the K^*_0(1430) resonance in the K^- η invariant mass distribution becomes more significant when the value of β increases. The signal corresponding to the D D̅ bound state X(3700) is, however, always clear in the ηη_c invariant mass distribution. On the other hand, the value of the color factor C, which represents the relative weight of the external W^- emission mechanism with respect to the internal W^- emission mechanism, could vary around 3 in order to account for the potential nonfactorizable contributions <cit.>. To this end, we show in Fig. <ref> the normalized ηη_c and K^- η invariant mass distributions of the B^- → K^- ηη_c decay by taking three different values of C = 3.0, 2.5, 2.0. One can see that, although the peak of the X(3700) state in the ηη_c invariant mass distribution becomes weaker when the value of C decreases, its signal is still clear and will be easy to be distinguished from the background contribution. Meanwhile, the peak of the K^*_0(1430) resonance in the K^-η invariant mass distribution has little changes for these three different values of the parameter C, because the contribution from the DD̅ bound state is smooth around the peak of K^*_0(1430) in the K^-η invariant mass distribution. From the above analyses, one can find that within the variation ranges of the two free parameters, there is always a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which corresponds to the D D̅ bound state. 
Thus, we suggest strongly that our experimental colleagues can perform more precise measurements of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments in the future, which is very important for confirming the existence of the predicted D D̅ bound state. § CONCLUSIONS In this paper, motivated by the theoretical predictions for the DD̅ bound state, we propose to search for this state in the B^- → K^- ηη_c decay. To this end, we have investigated the process within the unitary coupled-channel approach, by taking into account the contributions from the S-wave pseudoscalar meson–pseudoscalar meson interactions, which can dynamically generate the DD̅ bound state X(3700). We have also taken into account the contribution from the intermediate resonance K^*_0(1430), since it couples to the Kη channel in an S-wave way with a branching fraction of ℬ(K^*_0(1430)→ Kη)=(8.6^+2.7_-3.4)%. Our results show that a clear peak appears around 3730 MeV in the ηη_c invariant mass distribution, which should be associated with the DD̅ bound state. It should be stressed that the line shape of the DD̅ bound state is significantly different from that of a Breit-Winger form, which is a typical feature of the DD̅ molecular state. On the other hand, one can also find the peak of the resonance K^*_0(1430) in the K^-η invariant mass distribution, and the resonance gives a smooth contribution in the ηη_c invariant mass distribution. In summary, we strongly encourage our experimental colleagues to perform a more precise measurement of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments in the future, which will be very helpful to confirm the existence of the predicted D D̅ bound state, as well as to deepen our understanding of the hadron-hadron interactions. § ACKNOWLEDGEMENTS This work is supported by the National Natural Science Foundation of China under Grant Nos. 12135006, 12075097 and 12192263, the Natural Science Foundation of Henan under Grand Nos. 222300420554 and 232300421140, the Project of Youth Backbone Teachers of Colleges and Universities of Henan Province (2020GGJS017), the Youth Talent Support Project of Henan (2021HYTP002), the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology (No. NLK2021-08), as well as the Fundamental Research Funds for the Central Universities under Grant Nos. CCNU19TD012 and CCNU22LJ004. 99 Belle:2003nnu S. K. Choi et al. [Belle], Observation of a narrow charmonium-like state in exclusive B^±→ K^±π^+ π^- J/ψ decays, Phys. Rev. Lett. 91 (2003), 262001. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], Review of Particle Physics, PTEP 2022 (2022), 083C01. Pakvasa:2003ea S. Pakvasa and M. Suzuki, On the hidden charm state at 3872 MeV, Phys. Lett. B 579 (2004), 67-73. Chen:2015ata W. Chen, T. G. Steele, H. X. Chen and S. L. Zhu, Mass spectra of Z_c and Z_b exotic states as hadron molecules, Phys. Rev. D 92 (2015), 054002. Molina:2009ct R. Molina and E. Oset, The Y(3940), Z(3930) and the X(4160) as dynamically generated resonances from the vector-vector interaction, Phys. Rev. D 80 (2009), 114013. Guo:2017jvc F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou, Rev. Mod. Phys. 90 (2018) no.1, 015004 [erratum: Rev. Mod. Phys. 94 (2022) no.2, 029901]. Gamermann:2006nm D. Gamermann, E. Oset, D. Strottman and M. J. Vicente Vacas, Dynamically generated open and hidden charm meson systems, Phys. Rev. D 76 (2007), 074016. Gamermann:2009ouq D. Gamermann, E. Oset and B. S. 
Zou, The radiative decay of ψ(3770) into the predicted scalar state X(3700), Eur. Phys. J. A 41 (2009), 85-91. Prelovsek:2020eiw S. Prelovsek, S. Collins, D. Mohler, M. Padmanath and S. Piemonte, Charmonium-like resonances with J^PC = 0^++, 2^++ in coupled D D̅, D_s D̅_s scattering on the lattice, JHEP 06 (2021), 035. Dong:2021bvy X. K. Dong, F. K. Guo and B. S. Zou, A survey of heavy–heavy hadronic molecules, Commun. Theor. Phys. 73 (2021), 125201. Chen:2021erj H. X. Chen, Hadronic molecules in B decays, Phys. Rev. D 105 (2022) 9, 094003. Shi:2021hzm P. P. Shi, Z. H. Zhang, F. K. Guo and Z. Yang, D^+ D^- hadronic atom and its production in pp and p p̅ collisions, Phys. Rev. D 105 (2022), 034024. Xin:2022bzt Q. Xin, Z. G. Wang and X. S. Yang, Analysis of the X(3960) and related tetraquark molecular states via the QCD sum rules, AAPPS Bull. 32 (2022) 1, 37. Peng:2023lfw F. Z. Peng, M. J. Yan and M. Pavon Valderrama, Heavy- and light-flavor symmetry partners of the T_cc^+(3875), the X(3872) and the X(3960) from light-meson exchange saturation, [arXiv:2304.13515 [hep-ph]]. Gamermann:2007mu D. Gamermann and E. Oset, Hidden charm dynamically generated resonances and the e^+ e^- → J/ψ D D̅, J/ψ D D̅^* reactions, Eur. Phys. J. A 36 (2008), 189-194. Wang:2019evy E. Wang, W. H. Liang and E. Oset, Analysis of the e^+e^- → J/ψ D D̅ reaction close to the threshold concerning claims of a χ_c0(2P) state, Eur. Phys. J. A 57 (2021), 38. Belle:2017egg K. Chilikin et al. [Belle], Observation of an alternative χ_c0(2P) candidate in e^+ e^- → J/ψ D D̅, Phys. Rev. D 95 (2017), 112003. Dai:2015bcc L. R. Dai, J. J. Xie and E. Oset, B^0 → D^0 D̅^0 K^0 , B^+ → D^0 D̅^0 K^+ , and the scalar D D̅ bound state, Eur. Phys. J. C 76 (2016) 3, 121. Belle:2005rte S. Uehara et al. [Belle], Observation of a χ^'_c2 candidate in γγ→ D D̅ production at BELLE, Phys. Rev. Lett. 96 (2006), 082003. BaBar:2010jfn B. Aubert et al. [BaBar], Observation of the χ_c2(2P) meson in the reaction γγ→ D D̅ at BaBar, Phys. Rev. D 81 (2010), 092003. Deineka:2021aeu O. Deineka, I. Danilkin and M. Vanderhaeghen, Dispersive analysis of the γγ→ D D̅ data and the confirmation of the D D̅ bound state, Phys. Lett. B 827 (2022), 136982. Wang:2020elp E. Wang, H. S. Li, W. H. Liang and E. Oset, Analysis of the γγ→ DD̅ reaction and the DD̅ bound state, Phys. Rev. D 103 (2021), 054008. Xiao:2012iq C. W. Xiao and E. Oset, Three methods to detect the predicted D D̅ scalar meson X(3700), Eur. Phys. J. A 49 (2013), 52. Dai:2020yfu L. Dai, G. Toledo and E. Oset, Searching for a D D̅ bound state with the ψ (3770) →γ D^0 D̅^0 decay, Eur. Phys. J. C 80 (2020) 6, 510. Wei:2021usz L. L. Wei, H. S. Li, E. Wang, J. J. Xie, D. M. Li and Y. X. Li, Search for a D D̅ bound state in the Λ_b →Λ DD̅ process, Phys. Rev. D 103 (2021), 114013. BESIII:2023bgk M. Ablikim et al. [BESIII], Search for a scalar partner of the X(3872) via ψ(3770) decays into γηη' and γπ^+π^- J/ψ, [arXiv:2305.11682 [hep-ex]]. Xing:2022uqu Z. P. Xing, F. Huang and W. Wang, Angular distributions for Λ_b →Λ^*_J (p K^-) J/ψ (→ℓ^+ ℓ^-) decays, Phys. Rev. D 106 (2022), 114041. Duan:2023qsg M. Y. Duan, E. Wang and D. Y. Chen, Searching for the open flavor tetraquark T^++_cs̅0(2900) in the process B^+→ K^+ D^+ D^-, [arXiv:2305.09436 [hep-ph]]. Lyu:2023jos W. T. Lyu, Y. H. Lyu, M. Y. Duan, D. M. Li, D. Y. Chen and E. Wang, The roles of the T_cs̅0(2900)^0 and D_0^*(2300) in the process B^-→ D_s^+K^-π^-, [arXiv:2306.16101 [hep-ph]]. Bediaga:2020qxg I. Bediaga and C. 
Göbel, Direct CP violation in beauty and charm hadron decays, Prog. Part. Nucl. Phys. 114, 103808 (2020). Wang:2021aql F. L. Wang, X. D. Yang, R. Chen and X. Liu, Correlation of the hidden-charm molecular tetraquarks and the charmoniumlike structures existing in the B→ XYZ+K process, Phys. Rev. D 104 (2021), 094010. Dai:2018nmw L. R. Dai, G. Y. Wang, X. Chen, E. Wang, E. Oset and D. M. Li, The B^+→ J/ψω K^+ reaction and D^∗D̅^∗ molecular states, Eur. Phys. J. A 55 (2019) no.3, 36. Zhang:2020rqr Y. Zhang, E. Wang, D. M. Li and Y. X. Li, Search for the D^*D̅^* molecular state Z_c(4000) in the reaction B^-→ J/ψρ^0 K^-, Chin. Phys. C 44 (2020) no.9, 093107. Wang:2017mrt E. Wang, J. J. Xie, L. S. Geng and E. Oset, Analysis of the B^+→ J/ψϕ K^+ data at low J/ψϕ invariant masses and the X(4140) and X(4160) resonances, Phys. Rev. D 97 (2018), 014017. LHCb:2021uow R. Aaij et al. [LHCb], Observation of New Resonances Decaying to J/ψ K^+ and J/ψϕ, Phys. Rev. Lett. 127 (2021), 082001. CDF:2009jgo T. Aaltonen et al. [CDF], Evidence for a Narrow Near-Threshold Structure in the J/ψϕ Mass Spectrum in B^+→ J/ψϕ K^+ Decays, Phys. Rev. Lett. 102 (2009), 242002. D0:2013jvp V. M. Abazov et al. [D0], Search for the X(4140) state in B^+ → J/ψϕ K^+ decays with the D0 Detector, Phys. Rev. D 89 (2014), 012004. LHCb:2020bls R. Aaij et al. [LHCb], A model-independent study of resonant structure in B^+→ D^+D^-K^+ decays, Phys. Rev. Lett. 125 (2020), 242001. LHCb:2020pxc R. Aaij et al. [LHCb], Amplitude analysis of the B^+→ D^+D^-K^+ decay, Phys. Rev. D 102 (2020), 112003. Belle:2015yoa A. Vinokurova et al. [Belle], Search for B decays to final states with the η_c meson, JHEP 06 (2015), 132 [erratum: JHEP 02 (2017), 088]. Belle-II:2018jsg E. Kou et al. [Belle-II], The Belle II Physics Book, PTEP 2019 (2019), 123C01 [erratum: PTEP 2020 (2020), 029201]. Bhardwaj:2018ffc V. Bhardwaj [Belle-II], Prospects in spectroscopy with Belle II, Springer Proc. Phys. 234 (2019), 181-187. Xie:2022lyw J. M. Xie, M. Z. Liu and L. S. Geng, Production rates of D_s^+ D_s^- and D D̅ molecules in B decays, Phys. Rev. D 107 (2023), 016003. Wang:2020pem Z. Wang, Y. Y. Wang, E. Wang, D. M. Li and J. J. Xie, The scalar f_0(500) and f_0(980) resonances and vector mesons in the single Cabibbo-suppressed decays Λ_c → p K^+K^- and pπ^+π^-, Eur. Phys. J. C 80 (2020) 9, 842. Wang:2021naf J. Y. Wang, M. Y. Duan, G. Y. Wang, D. M. Li, L. J. Liu and E. Wang, The a_0(980) and f_0(980) in the process D_s^+ → K^+ K^- π^+, Phys. Lett. B 821 (2021), 136617. Liu:2020ajv W. Y. Liu, W. Hao, G. Y. Wang, Y. Y. Wang, E. Wang and D. M. Li, Resonances X(4140), X(4160), and P_cs(4459) in the decay of Λ_b→ J/ψΛϕ, Phys. Rev. D 103 (2021), 034019. Duan:2020vye M. Y. Duan, J. Y. Wang, G. Y. Wang, E. Wang and D. M. Li, Role of scalar a_0(980) in the single Cabibbo suppressed process D^+ →π ^+π ^0η, Eur. Phys. J. C 80 (2020) 11, 1041. Zhang:2022xpf H. Zhang, Y. H. Lyu, L. J. Liu and E. Wang, Role of the scalar f_0(980) in the process D_s^+ →π^+π^0π^0, Chin. Phys. C 47 (2023) no.4, 043101. Li:2020fqp X. C. Feng, L. L. Wei, M. Y. Duan, E. Wang and D. M. Li, The a_0(980) in the single Cabibbo-suppressed process Λ_c →π^0η p, [arXiv:2009.08600 [hep-ph]]. Ali:1998eb A. Ali, G. Kramer and C. D. Lu, Experimental tests of factorization in charmless nonleptonic two-body B decays, Phys. Rev. D 58, 094009 (1998).
http://arxiv.org/abs/2307.04539v1
20230710130519
Neural functional theory for inhomogeneous fluids: Fundamentals and applications
[ "Florian Sammüller", "Sophie Hermann", "Daniel de las Heras", "Matthias Schmidt" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.stat-mech" ]
[email protected] Theoretische Physik II, Physikalisches Institut, Universität Bayreuth, D-95447 Bayreuth, Germany We present a hybrid scheme based on classical density functional theory and machine learning for determining the equilibrium structure and thermodynamics of inhomogeneous fluids. The exact functional map from the density profile to the one-body direct correlation function is represented locally by a deep neural network. We substantiate the general framework for the hard sphere fluid and use grand canonical Monte Carlo simulation data of systems in randomized external environments during training and as reference. Functional calculus is implemented on the basis of the neural network to access higher-order correlation functions via automatic differentiation and the free energy via functional line integration. Thermal Noether sum rules are validated explicitly. We demonstrate the use of the neural functional in the self-consistent calculation of density profiles. The results outperform those from state-of-the-art fundamental measure density functional theory. The low cost of solving an associated Euler-Lagrange equation allows to bridge the gap from the system size of the original training data to macroscopic predictions upon maintaining near-simulation microscopic precision. These results establish the machine learning of functionals as an effective tool in the multiscale description of soft matter. Neural functional theory for inhomogeneous fluids: Fundamentals and applications Matthias Schmidt August 12, 2023 ================================================================================ § INTRODUCTION The problem with density functional theory (DFT) is that you do not know the density functional. Although this quip by the late and great Yasha Rosenfeld <cit.> was certainly meant in jest to a certain degree, it does epitomize a structural assessment of classical DFT <cit.>. As a general formulation of many-body statistical physics, the framework comprises a beautiful and far reaching skeleton of mathematical formalism centered around a formally exact variational minimization principle <cit.>. In practice however, the theory needs to be fleshed out by approximations of all means conceivable in our efforts to get to grips with the coupled many-body problem that is under consideration. Specifically, it is the excess (over ideal gas) intrinsic Helmholtz free energy [ρ], expressed as a functional of the position-resolved density profile ρ(r⃗), which needs to be approximated. Decades of significant theoretical efforts have provided us with a single exact functional, that for nonoverlapping hard rods in one spatial dimension, as obtained by another hero in the field, Jerry Percus <cit.>. Nevertheless, useful DFT approximations range from the local density approximation for large scale features which are decoupled from microscopic length scales, to square-gradient functionals with their roots in the 19th century, to the arguably most important modern development, that of the fundamental measure theory (FMT) as kicked off by Rosenfeld in 1989 <cit.> and much refined ever since <cit.>. FMT is a geometry-based framework for the description of hard sphere systems and it has deep roots in the Percus-Yevick <cit.> and scaled-particle theories <cit.>, which Rosenfeld was able to unify and generalize based on his unique theoretical insights <cit.>. The realm of soft matter <cit.> stretches far beyond the hard sphere fluid. 
FMT remains relevant though in the description of a reference system as used e.g. in studies of hydrophobicity, where the behaviour of realistic water models <cit.> is traced back to the simpler Lennard-Jones fluid, which in turn is approximated via the hard sphere FMT functional plus a mean-field contribution for describing interparticle attraction <cit.>. Further topical uses of FMT include the analysis of the three-dimensional electrolyte structure near a solid surface <cit.> and the problem of the decay length of correlations in electrolytes <cit.>. There is a current surge in the use of machine learning techniques in soft matter, e.g. for its characterization <cit.>, engineering of self-assembly <cit.>, structure detection <cit.>, and for learning many-body potentials <cit.>. Within classical DFT, machine learning was used to address ordering of confined liquid crystals <cit.>, and free energy functionals were obtained for one-dimensional systems from convolutional <cit.> and equation-learning <cit.> networks as well as within a Bayesian inference approach <cit.>. <cit.> used machine learning to improve the standard mean-field approximation of the excess Helmholtz free-energy functional for the Lennard-Jones fluid. In nonequilibrium, <cit.> have reported a method to machine-learn the functional relationship of the local internal force for a steady uniaxial compressional flow of a Lennard-Jones fluid at constant temperature. As prescribed by power functional theory <cit.>, the functional dependence in nonequilibrium not only incorporates the density profile but also the one-body current. In this work, we return to the problem of describing and predicting the structure and thermodynamics of inhomogeneous equilibrium fluids. We show that a neural network can be trained to accurately represent the functional dependence of the one-body direct correlation function with respect to the density profile. While the presented methods are applicable to virtually arbitrary fluids with short-ranged interparticle interactions, we focus here on the well-studied hard-sphere fluid in order to exemplify our framework and to challenge the available highly accurate analytic approaches from liquid integral equation theory and FMT. Reference data for training and testing the model is provided by grand canonical Monte Carlo (GCMC) simulations that cover a broad range of randomized inhomogeneous environments in planar geometry. We implement functional calculus on the basis of the trained neural functional to infer related physical quantities and demonstrate their consistency with known literature results both in bulk and in inhomogeneous systems. In particular, we highlight the accessibility of the fluid pair structure, the determination of free energies and equations of state as well as the validation of thermal Noether sum rules <cit.>. These results corroborate that the neural functional exceeds its role as a mere interpolation device and instead possesses significant representational power as a genuine density functional for the prediction of nontrivially related physical properties. We apply the trained neural network in the DFT Euler-Lagrange equation, which enables the self-consistent calculation of density profiles and which hence constitutes a neural-network-based DFT or short neural DFT. 
This method alleviates conventional DFT from the burden of having to find suitable analytic approximations while still surpassing even the most profound existing treatments of the considered hard sphere fluid via FMT functionals <cit.> in accuracy. We further demonstrate the fitness of the method for the straightforward application to multiscale problems. Neural DFT therefore provides a way to transfer near-simulation microscopic precision to macroscopic length scales, which serves as a technique to predict properties of inhomogeneous systems which far exceed typical box sizes of the original training data. This work is structured as follows. The relevant physical background of liquid state theory is provided in Sec. <ref>. Details of the simulations as well as of the neural network are given in Secs. <ref> and <ref>. The training procedure and results for the achieved metrics that measure its convergence are presented in Sec. <ref>. We proceed by testing physical properties of the trained model and use automatic differentiation of the neural network in Sec. <ref> to access pair correlations, which are then compared to bulk results from both the Percus-Yevick theory and from simulations. The consistency of the neural direct correlation functional to satisfy thermal Noether sum rules is validated in Sec. <ref>, and different ways to obtain the bulk equation of state as well as free energies in inhomogeneous systems are given in Sec. <ref>. In Sec. <ref>, we show the application of the neural functional to the self-consistent calculation of density profiles via the DFT Euler-Lagrange equation and describe the technical details and conceptual advantages of this neural DFT over analytic approaches. In Sec. <ref>, the results are compared to those from FMT, and in Sec. <ref>, the relevance of the method for making macroscopic predictions is illustrated for cases of randomized external potential and for sedimentation between hard walls on length scales that far exceed the training simulation box sizes. We conclude in Sec. <ref> and give an outlook to possible improvements and extensions of the method as well as to its application for different fluid types, in more general geometries and in nonequilibrium. § MACHINE LEARNING INTRINSIC CORRELATIONS §.§ Physical background We start with the standard relation for the one-body direct correlation function c_1(r⃗) of liquid state theory <cit.>, c_1(r⃗) = lnρ(r⃗) + β(r⃗) - βμ, where r⃗ denotes the spatial position and β = 1 / (k_B T) with the Boltzmann constant k_B and absolute temperature T. The three terms on the right hand side of Eq. (<ref>) represent respectively the ideal gas contribution, the external potential (r⃗) and the influence of the particle bath at chemical potential μ. The logarithm in Eq. (<ref>) is understood as ln[Λ^3 ρ(r⃗)] with the thermal wavelength Λ, which can be set to the particle size σ without any loss of information in the present classical context. For a prescribed external potential (r⃗), knowledge of the corresponding equilibrium density profile ρ(r⃗) allows to compute c_1(r⃗) explicitly via Eq. (<ref>). This relationship can be viewed as a locally resolved chemical potential balance: the contribution from the ideal gas, k_B T lnρ(r⃗), from the external potential, (r⃗), and from interparticle interactions, - k_B T c_1(r⃗), add up at each position to μ, which is necessarily uniform throughout an equilibrium system. However, the notation in Eq. 
(<ref>) is oblivious to a central result shown by <cit.> in 1979, thereby kicking off a modern theory for the description of inhomogeneous fluids. For given type of internal interactions, the spatial variation of the function c_1(r⃗) is already uniquely determined by the spatial form of the density profile ρ(r⃗) alone, without the need to invoke the external potential explicitly. From this vantage point of classical DFT, the dependence of c_1(r⃗) on ρ(r⃗) is not merely pointwise but rather with respect to the values of the entire density profile, which determine c_1(r⃗) at each given position r⃗. Formally, this relationship is exact <cit.> and it constitutes a functional dependence c_1(r⃗; [ρ]), which is indicated by brackets here and in the following and which is in general nonlinear and nonlocal. As we will demonstrate, the existence of such a universal functional mapping makes the problem of investigating inhomogeneous fluids particularly amenable to supervised machine learning techniques. In most formulations of classical DFT, one exploits the fact that the intrinsic excess free energy functional [ρ] acts as a functional generator such that the one-body direct correlation function is obtained via functional differentiation with respect to the density profile, c_1(r⃗; [ρ]) = - δβ[ρ]/δρ(r⃗). A compact description of standard formulae for the calculation of functional derivatives can be found in Ref. Schmidt2022. In order to make progress in concrete applications, one typically needs to rely on using an approximate form of [ρ] for the specific model under consideration, as determined by its interparticle interactions. DFT is a powerful framework, as using c_1(r⃗; [ρ]) obtained from Eq. (<ref>) with a suitable expression for [ρ] turns Eq. (<ref>) into an implicit equation for the equilibrium density profile ρ(r⃗). In the presence of a known form of (r⃗), one can typically solve Eq. (<ref>) very efficiently, allowing ease of parameter sweeps, e.g. for exhaustive phase diagram explorations. On the downside, [ρ] and thus also c_1(r⃗; [ρ]) remain approximative and the development of analytic tools has certainly slowed down over several years if not decades. Here we proceed differently and bypass the excess free energy functional [ρ] at first. Instead, we use a deep neural network to learn and to represent the functional relationship ρ(r⃗) → c_1(r⃗) directly, which has significant advantages both for the generation of suitable training data as well as for the applicability of the model in the determination of fluid equilibria. This investigation is based on GCMC simulations that serve to provide training, validation and test data. Discriminating between these three roles of use is standard practice in machine learning and we give further details below. §.§ Simulation method Generating the simulation data is straightforward and we use the following strategy, adopted to planar situations where the position-dependence is on a single position variable x while the system remains translationally invariant in the y- and z-direction. This geometry is highly relevant to identify the physics in planar capillary and adsorption situations and facilitates ease of accurate sampling. We employ randomized simulation conditions by generating external potentials of the form (x) = ∑_n=1^4 A_n sin(2 π n x/L + ϕ_n) + ∑_n V_n^lin(x), where A_n and ϕ_n are randomly selected Fourier coefficients and phases, respectively, and L is the simulation box length in x-direction. 
We choose L = 20 σ, although there is no specific compliance requirement for the neural network (see below), and the lateral box lengths are set to 10σ to minimize finite-size effects. Periodic boundary conditions apply in all spatial directions. The sinusoidal terms in (x) are complemented by up to five piecewise linear functions V^lin(x) = V_1 + (V_2 - V_1) (x - x_1) / (x_2 - x_1) for x_1 < x < x_2 and 0 otherwise, for which the parameters 0 < x_1 < x_2 < L, V_1 and V_2 are again chosen randomly. Additionally, we explicitly impose planar hard walls in a subset of the simulations by setting (x) = ∞ for x < x_w/2 and x > L - x_w/2, i.e. near the borders of the simulation domain; the width x_w of the wall is chosen randomly in the interval 1 ≤ x_w / σ≤ 3. To cover a broad range from dilute to dense systems, the chemical potential is chosen randomly within the range -5 ≤βμ≤ 10 for each respective GCMC simulation run. The observed mean densities range from 0.006 σ^-3 to 0.803 σ^-3, yet smaller and much larger local densities occur due to the inhomogeneous nature of the systems. In total, 750 such GCMC runs are used, where for given form of (x) the planar one-body profiles ρ(x) and c_1(x) are obtained. The former is acquired from straightforward histogram filling and the latter from evaluating Eq. (<ref>) on the basis of the sampled histogram for ρ(x) as well as the known form of (x) and value of μ for the specific run under consideration. As Eq. (<ref>) is undefined for vanishing density, we have excluded regions where ρ(x) = 0 such as within the hard walls. By modern standards of computational resources, the workload for the generation of the simulation data is only moderate at a total CPU time of ∼ 10^4 hours. §.§ Neural network We use a deep neural network <cit.> to represent the functional map from the density profile to the local value of the one-body direct correlation function at a given point. That is, instead of the entire function, we construct the network to output only the scalar value c_1(x) for a certain position x when supplied with the surrounding inhomogeneous density. The relevant section of the density profile comprises the values of ρ(x) in a specified window around a considered location x, as described below. Despite the locality of the method, access to the entire (discretized) one-body direct correlation profile is immediate via evaluation of the neural network at pertinent positions x across the domain of interest. Multiple local evaluations of the network remain performant on highly parallel hardware such as GPUs when passing the input accordingly in batches. A schematic picture of the network architecture is given in Fig. <ref> and is explained in the following. The functional dependence on the density profile is realized by providing discretized values of ρ(x) on an equidistant grid with resolution Δ x = 0.01 σ. As c_1(x; [ρ]) depends only on the immediately surrounding density profile around a fixed location x, we restrict the input range x' to a sufficiently large window x' ≤ |x - x_c|. We choose the cutoff x_c = 2.56 σ based on simulation data for the bulk direct correlation function <cit.> and on the evaluation of training metrics for different window sizes x_c. Increasing the value of x_c further led to no improvement in the performance of the trained neural network. This behavior is expected from theoretical considerations, as the one-body direct correlation function vanishes quickly for short-ranged pair potentials <cit.>. 
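To make this data-generation step concrete, a minimal Python sketch is given below; it assembles a randomized external potential of the form of Eq. (<ref>) and converts a sampled density profile into the target c_1(x) via Eq. (<ref>). The amplitude ranges of the Fourier and linear contributions are not specified above and are chosen here purely for illustration, and the flat stand-in profile merely marks where the sampled GCMC result for ρ(x) would enter.

import numpy as np

rng = np.random.default_rng(1)
L, dx, beta = 20.0, 0.01, 1.0                      # box length and grid in units of sigma; k_B T = 1
x = np.arange(0.0, L, dx)

def random_external_potential():
    # four sinusoidal modes with random amplitudes and phases (illustrative ranges)
    A = rng.uniform(-2.0, 2.0, 4)
    phi = rng.uniform(0.0, 2.0 * np.pi, 4)
    V = sum(A[n] * np.sin(2.0 * np.pi * (n + 1) * x / L + phi[n]) for n in range(4))
    # up to five piecewise linear contributions
    for _ in range(rng.integers(0, 6)):
        x1, x2 = np.sort(rng.uniform(0.0, L, 2))
        V1, V2 = rng.uniform(-2.0, 2.0, 2)
        m = (x > x1) & (x < x2)
        V[m] += V1 + (V2 - V1) * (x[m] - x1) / (x2 - x1)
    # subset of systems with hard walls of random width near the box borders
    if rng.random() < 0.5:
        xw = rng.uniform(1.0, 3.0)
        V[(x < xw / 2.0) | (x > L - xw / 2.0)] = np.inf
    return V

def c1_target(rho, V, mu):
    # Eq. (1): c_1(x) = ln rho(x) + beta V_ext(x) - beta mu, undefined where rho(x) = 0
    c1 = np.full_like(rho, np.nan)
    ok = rho > 0.0
    c1[ok] = np.log(rho[ok]) + beta * V[ok] - beta * mu
    return c1

mu = rng.uniform(-5.0, 10.0)                       # beta mu drawn from [-5, 10]
V = random_external_potential()
rho = np.where(np.isfinite(V), 0.4, 0.0)           # placeholder for the sampled GCMC profile
c1 = c1_target(rho, V, mu)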
We recall that in FMT, x_c = σ by construction. Note that the choice of c_1(x; [ρ]) as our target functional is not coincidental, but that its quick spatial decay rather is a pivotal characteristic central to the success of our method. To contrast this, assume that one attempts to model the functional mapping μ_loc(x) = μ - (x) →ρ(x), thereby naively imitating the simulation procedure. This task poses major challenges due to the long-range nature of density correlations induced by an external potential, which is circumvented in our case by the choice of a more manageable target functional. The input layer involves 513 nodes and is followed by three fully-connected hidden layers with 512 units each. The output layer consists of a single node for the scalar value of c_1(x) at the specified location x. Crucially, continuously differentiable activation functions such as the exponential linear unit or the softplus function are used throughout the network for the realization of nonlinearities. This leads to substantial improvements particularly when evaluating two-body quantities via automatic differentiation (see Secs. <ref> and <ref>) as compared to the standard rectified linear unit (ReLU). We attribute this superior performance to the fact that activation functions which are not continuously differentiable and which vanish in certain domain ranges (such as ReLU) reinforce sparsity of the activation output within the hidden layers <cit.>. While this property is desired in many machine learning tasks (e.g. for classification), it hinders the accurate representation of the functional relation c_1(x; [ρ]) in our case. The resulting neural functional for the one-body direct correlation function is denoted in the following by c_1^⋆(x; [ρ]) and related quantities which follow from its inference are marked accordingly by a superscript star. §.§ Training procedure and metrics The machine learning routines are implemented in Keras/Tensorflow <cit.> and we use the standard Adam <cit.> optimizer for the adjustment of the network parameters in order to fit c_1^⋆(x; [ρ]) against the simulation reference c_1(x). The problem at hand is a regression task. Hence, the mean squared error is chosen as a suitable loss function and the mean average error serves as a validation metric. Since the model shall infer the pointwise value c_1(x) from a density section around a specified location x, see Fig. <ref>, the simulation data cannot be passed as is to the neural network. Instead, windowed views of the density profile have to be generated prior to the training loop, which correspond to the target value c_1(x) at the center x of the respective window. A periodic continuation of all simulation profiles is valid due to periodic boundary conditions. Additionally, we use data augmentation to benefit from the inherent mirror symmetry (i.e. x → -x) of the problem and thus effectively double the number of training data sets. As is customary, we separate the independent simulation results prior to performing the machine learning routines: 150 are kept aside as a test set, 150 serve as validation data to monitor training progress and 450 are used for the actual training of the neural network. Modeling the functional relationship of c_1(x; [ρ]) locally, i.e. inferring pointwise values individually instead of outputting the entire profile at once, has numerous conceptual and practical advantages. 
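Before turning to these advantages, the architecture and training setup just described can be summarized in a compact Keras sketch; the layer sizes, activations, optimizer, loss and learning-rate decay follow the description above, whereas the number of decay steps per epoch and the placeholder array names in the commented training call are assumptions.

import numpy as np
from tensorflow import keras

dx, x_c = 0.01, 2.56
n_window = 2 * round(x_c / dx) + 1                 # 513 density values around the location x

def build_c1_network(steps_per_epoch=7000):
    # density window rho(x') for |x' - x| <= x_c  ->  scalar c_1(x)
    model = keras.Sequential([
        keras.Input(shape=(n_window,)),
        keras.layers.Dense(512, activation="softplus"),
        keras.layers.Dense(512, activation="softplus"),
        keras.layers.Dense(512, activation="softplus"),
        keras.layers.Dense(1),
    ])
    # exponential learning-rate decay of roughly 5% per epoch from 0.001
    lr = keras.optimizers.schedules.ExponentialDecay(
        1e-3, decay_steps=steps_per_epoch, decay_rate=0.95)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="mse", metrics=["mae"])
    return model

def density_windows(rho):
    # windowed, periodically continued views of a density profile; row i is centered on grid point i
    half = n_window // 2
    idx = (np.arange(len(rho))[:, None] + np.arange(-half, half + 1)[None, :]) % len(rho)
    return rho[idx]

# training would then proceed as, e.g.,
# model = build_c1_network()
# model.fit(windows_train, c1_train, validation_data=(windows_val, c1_val),
#           epochs=100, batch_size=256)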
Regarding the feasibility of the neural network in concrete applications, one is free to choose an arbitrary box length L when gathering training data and more importantly to readjust the value of L when using the trained neural network for making predictions (cf. Sec. <ref>). From a physical point of view, providing only local density information has the merit of already capturing the correlated short-range behavior of c_1(x; [ρ]). If the neural network were to output the entire one-body direct correlation profile from a given density profile ρ(x) at once, this inherent locality would have to be learned instead, hence leading to a much more elaborate training process. Lastly, the fine-grained nature of the training data turns out to be highly beneficial from a machine learning perspective. Note that one can generate 9 · 10^5 input-output pairs from 450 training simulations in the present context (with the values being doubled after data augmentation). The increased cardinality of the training set enables better generalization of the model and also prevents overfitting, e.g. to the statistical noise of the sampled profiles. We train the model for 100 epochs in batches of size 256 and decrease the learning rate exponentially by ∼ 5% per epoch from an initial value of 0.001. This results in a best mean average error of 0.0022 over the validation set, which is of the same order as the estimated average noise of the simulation data for c_1(x). Therefore, we deem our neural network to possess full representational power of the local functional relationship c_1(x; [ρ]) within the conditions of the provided simulation data. § EXAMINING THE NEURAL CORRELATION FUNCTIONAL §.§ Two-body bulk correlations Besides monitoring standard metrics such as the mean average error over a test set, arguably deeper physical insights into the rigorous structure of the statistical mechanics at hand serves for assessing the quality of the neural functional c_1^⋆(x; [ρ]). We first ascertain that the model gives an accurate representation of the physics of bulk fluids. Despite the apparent simplicity of this case, this is a highly nontrivial test as the training data solely covered (strongly) inhomogeneous situations. For this, we investigate the pair structure and aim at implementing the two-body direct correlation functional, which is formally defined as the functional derivative <cit.> c_2(r⃗, r⃗'; [ρ]) = δ c_1(r⃗; [ρ])/δρ(r⃗'). On the basis of the neural network, we can make use of the powerful automatic differentiation techniques. This allows to create an immediate analog of Eq. (<ref>) via c_2^⋆(x, x'; [ρ]) = δ c_1^⋆(x; [ρ]) / δρ(x'), where the functional derivative δ / δρ(x') is evaluated by reverse mode automatic differentiation with respect to the input values of the discretized density profile. In common machine learning frameworks, this requires only high-level code (e.g.  in Keras/Tensorflow <cit.>). The numerical evaluation of c_2^⋆(x, x'; [ρ]) is performant as reverse mode automatic differentiation generates executable code that is suitable for building derivatives with respect to multiple input variables simultaneously. We obtain the bulk direct correlation function in planar geometry as the special case c̅_2^b(x, ρ_b) = c_2(0, x; [ρ_b]), where we have introduced the bulk density ρ_b(x) = ρ_b = const. (In the notation, the parametric dependence on ρ_b is dropped in the following.) 
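In TensorFlow, for instance, this reverse-mode evaluation can be sketched as follows for the bulk special case, with model denoting the trained network from above; the division by the grid spacing converts the derivative with respect to the discrete density values into the functional derivative.

import numpy as np
import tensorflow as tf

dx, n_window = 0.01, 513
half = n_window // 2

def c2_bulk_planar(model, rho_b):
    # planar bulk function c2_b(x) = delta c_1(0; [rho]) / delta rho(x) at rho(x) = rho_b = const
    window = tf.constant(np.full((1, n_window), rho_b), dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(window)
        c1 = model(window)                         # scalar c_1 at the window center
    grad = tape.gradient(c1, window).numpy()[0]
    return grad / dx                               # derivative w.r.t. grid values -> functional derivative

x_rel = dx * (np.arange(n_window) - half)          # window coordinate x relative to the center
# c2_planar = c2_bulk_planar(model, rho_b=0.7)     # evaluated for, e.g., rho_b sigma^3 = 0.7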
Note that c̅_2^b(x) is distinct from the more common radial representation c_2^b(r), as our geometry implies an integration over the lateral directions y and z, i.e. c̅_2^b(x) = ∫ y z c_2^b(r = √(x^2 + y^2 + z^2)) = 2 π∫_x^∞ r r c_2^b(r), where the last equality follows from using radial coordinates and substitution. We commence by performing a Fourier transform of the planar real space representation c̅_2^b(x) and utilize radial symmetry in Fourier space. This acts as a deconvolution of Eq. (<ref>) and directly yields the radial Fourier (Hankel) transform of c_2^b(r), c̃_2^b(k) = 4 π/k∫_0^∞ r r sin(kr) c_2^b(r). The inverse transform is identical to Eq. (<ref>) up to a factor of (2 π)^-3 upon interchanging r and k. To go further, the bulk Ornstein-Zernike equation <cit.> c̃_2^b(k) = h̃(k)/1 + ρ_b h̃(k) is used to obtain the total correlation function h̃(k) from c̃_2^b(k) in Fourier space after rearrangement. Recall that the radial distribution function follows directly via g(r) = h(r) + 1; here h(r) is the real space representation of h̃(k). The static structure factor S(k) is then given as S(k) = 1 + ρ_b h̃(k). In Fig. <ref>, results of c̅_2^b(x), c̃_2^b(k), h̃(k) and S(k) are shown for different bulk densities ρ_b σ^3 = 0.4, 0.7, 0.9. From our neural functional, we obtain c̅_2^b ⋆(x) = δ c_1^⋆(0; [ρ]) / δρ(x) |_ρ = ρ_b, i.e. the autodifferentiated network is evaluated at spatially constant density ρ_b. The total correlation function and the static structure factor follow from Eqs. (<ref>) and (<ref>) after having computed c̃_2^b ⋆(k) via a numerical Fourier transform of c̅_2^b ⋆(x). For comparison, we also depict reference data obtained analytically from the Percus-Yevick theory <cit.> and reproduced from simulation results of <cit.>. Good agreement is found between simulation and the autodifferentiated neural network, while the Percus-Yevick result shows noticeable deviations in c̅_2^b(x). The latter overestimates the depth of the core region x < σ and this discrepancy increases for larger bulk densities. The neural functional yields a clear improvement over the Percus-Yevick theory and shows only marginal differences to the simulation results of Ref. Groot1987 for both the planar real space and the radial Fourier space representation of the two-body direct correlation function. In h̃(k) and S(k), the severity of the discrepancies of simulation and machine-learning data to the Percus-Yevick results decreases, but a difference is still noticeable in particular for large bulk densities. A slight mismatch to the simulation reference is observed in the magnitude and phase of the oscillations of the Percus-Yevick static structure factor S_PY(k), and this correction is reproduced very well by the neural functional. Note that although one arrives at radial representations of the quantities c̃_2^b(k), h̃(k) and S(k) in Fourier space, performing the radial backtransform to real space numerically according to the inverse of Eq. (<ref>) is generally a “notoriously difficult task” <cit.> and is not considered here. This successful test reveals that, while being trained solely with one-body profiles, the neural functional c_1^⋆(x; [ρ]) contains full two-body information equivalent in bulk to the radial distribution function g(r). The pair correlations can be accessed via automatic differentiation at low computational cost and they are consistent with known bulk results. 
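The numerical steps of Eqs. (<ref>)-(<ref>) can be sketched as below; since the lateral integration implies that the one-dimensional Fourier transform of the planar profile equals the radial transform c̃_2^b(k), a single cosine transform of the (even) function c̅_2^b(x) suffices before the Ornstein-Zernike relation is applied. The arrays x_rel and c2_planar are assumed to come from the autodifferentiation sketch above, and the wavenumber grid is an arbitrary choice.

import numpy as np

def bulk_structure(x_rel, c2_planar, rho_b, k_max=30.0, n_k=600):
    # Eq. (6): 1D Fourier transform of the laterally integrated c2_b(x) gives c2_b(k)
    k = np.linspace(1e-3, k_max, n_k)
    c2k = np.array([np.trapz(c2_planar * np.cos(kk * x_rel), x_rel) for kk in k])
    # Eq. (7), rearranged: h(k) = c2(k) / (1 - rho_b c2(k)); structure factor S(k) = 1 + rho_b h(k)
    hk = c2k / (1.0 - rho_b * c2k)
    Sk = 1.0 + rho_b * hk
    return k, c2k, hk, Sk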
We recall that this is a mere byproduct of the neural network and that no such two-body information has been explicitly incorporated in the training. More so, Fig. <ref> demonstrates that the bulk quantities c̅_2^b(x), c̃_2^b(k), h̃(k) and S(k) as obtained from c_1^⋆(x; [ρ]) substantially outperform the Percus-Yevick theory and almost attain simulation quality. In Appendix <ref>, we illustrate that higher-order correlations such as the three-body direct correlation functional c_3^⋆(x, x', x”; [ρ]) follow analogously via nested automatic differentiation. On this level, differences to FMT results are even more prominent than the deviations to the two-body Percus-Yevick results. As we will show in Sec. <ref>, the accuracy of predictions from the neural network also holds in inhomogeneous situations, where FMT serves again as an analogous and arguably even more challenging theoretical baseline than the Percus-Yevick bulk theory. Before doing so, we lay out additional consistency tests and quality assessments that are applicable in inhomogeneous systems. §.§ Noether sum rules In order to further elucidate whether c_1^⋆(x; [ρ]) quantitatively reproduces fundamental properties of equilibrium many-body systems, we make use of exact sum rules that follow from thermal Noether invariance <cit.>: ∇ c_1(r⃗) = ∫r⃗' c_2(r⃗, r⃗') ∇' ρ(r⃗'), ∫r⃗ρ(r⃗) ∫r⃗' ρ(r⃗') ∇ c_2(r⃗, r⃗') = 0. Both Eqs. (<ref>) and (<ref>) apply in any equilibrated inhomogeneous system regardless of the type of internal interactions. While the interparticle interaction potential does not appear explicitly in Eqs. (<ref>) and (<ref>), it nevertheless determines the functionals c_1(r⃗; [ρ]) and c_2(r⃗, r⃗'; [ρ]). Recall that the spatial gradient of the one-body direct correlation function can be identified with the internal equilibrium force profile, f⃗_int(r⃗) = k_B T ∇ c_1(r⃗) <cit.>. We verify that the neural functional complies with the above sum rules (<ref>) and (<ref>) as follows. Analogous to Sec. <ref>, we use autodifferentiation to evaluate Eq. (<ref>), but this time retain the full inhomogeneous structure of c_2^⋆(x, x'; [ρ]). The left hand side of Eq. (<ref>) is obtained straightforwardly from simple evaluation of the neural functional and numerical spatial differentiation. As input for ρ(x), we use the simulated density profiles of the test set. Care is required when evaluating the spatial gradients ∇ρ(x), ∇ c_1^⋆(x; [ρ]) and ∇ c_2^⋆(x, x'; [ρ]) due to the amplification of undesired noise, which we reduce by applying a low-pass filter after having taken the numerical derivatives. The volume integrals reduce in planar geometry to ∫r⃗ = A ∫ x, where A is the lateral system area. In Fig. <ref>, three typical profiles for the left and right hand side of Eq. (<ref>) are shown. In all three systems both sides of the equation coincide up to numerical noise due to the required spatial derivatives. Additionally, we define errors via scalar deviations from equality in Eqs. (<ref>) and (<ref>) respectively as e_1 = ‖∇ c_1(x) - A ∫ x' c_2(x, x') ∇' ρ(x') ‖_∞, e_2 = A^2 ∫ x ρ(x) ∫ x' ρ(x') ∇ c_2(x, x'), where ‖·‖_∞ denotes the maximum norm. Panels (a) and (b) of Fig. <ref> depict results for e_1 and e_2 as a function of the mean density ρ̅ = ∫r⃗ρ(r⃗) / V for all 150 density profiles of the test set, where V denotes the volume of the system. The small magnitudes of the observed error values indicate that the neural network satisfies the Noether identities (<ref>) and (<ref>) to very high accuracy. 
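On the discrete grid, the check of Eq. (<ref>) and the error measure e_1 of Eq. (<ref>) can be sketched as follows; the Jacobian of the network output with respect to the windowed density values plays the role of c_2(x, x') including the integration measure, so that the lateral area drops out of the discrete sum, and the low-pass filtering of the numerical gradients mentioned above is omitted for brevity.

import numpy as np
import tensorflow as tf

dx, n_window = 0.01, 513
half = n_window // 2

def noether_error_e1(model, rho):
    # compares grad c_1(x) with int dx' c_2(x, x') grad' rho(x') on a periodic grid, cf. Eqs. (8), (10)
    n = len(rho)
    idx = (np.arange(n)[:, None] + np.arange(-half, half + 1)[None, :]) % n
    windows = tf.constant(rho[idx], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(windows)
        c1 = model(windows)[:, 0]
    # output i depends only on window i, so the summed gradient returns row-wise d c1_i / d rho_j
    g = tape.gradient(tf.reduce_sum(c1), windows).numpy()
    jac = np.zeros((n, n))                          # jac[i, j] = d c1(x_i) / d rho(x_j)
    np.add.at(jac, (np.repeat(np.arange(n), n_window), idx.ravel()), g.ravel())
    lhs = np.gradient(c1.numpy(), dx)
    rhs = jac @ np.gradient(rho, dx)                # discrete right hand side of Eq. (8)
    return np.max(np.abs(lhs - rhs))                # e_1 of Eq. (10)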
Outliers can be attributed mostly to the moderate numerical noise of the spatial gradients, see panel (III) in Fig. <ref>, and are no hinderance in practical applications of the neural functional. This confirmation demonstrates that our method transcends the neural network from a mere interpolation device of the simulation training data to a credible standalone theoretical object. The fact that one is able to carry out consistent and performant functional calculus indeed renders c_1^⋆(x; [ρ]) a neural-network-based density functional. Besides functional differentiation, we show next that functional line integration acts as the inverse operation and provides access to the corresponding free energy. Appendix <ref> gives further insight into the symmetry properties of c_2^⋆(x, x'; [ρ]), which serve as a prerequisite for the existence of a generating excess free energy functional ^⋆[ρ]; we recall Eq. (<ref>). §.§ Equation of state and free energy Although the machine learning procedure operates on the level of the one-body direct correlation function, the excess free energy [ρ] is accessible by functional line integration <cit.>: β[ρ] = - ∫_0^1 α∫r⃗ρ(r⃗) c_1(r⃗; [ρ_α]). Here, ρ_α(r⃗) = αρ(r⃗) is a sequence of density profiles that are linearly parametrized by α in the range 0 ≤α≤ 1. The limits are ρ_0(r⃗) = 0 such that [0] = 0, and ρ_1(r⃗) = ρ(r⃗), which is the target density profile that appears as the functional argument on the left hand side of Eq. (<ref>). Other parametrizations of ρ_α(r⃗) are conceivable but change the concrete form of Eq. (<ref>). On the basis of c_1^⋆(x; [ρ]), we implement Eq. (<ref>) via β^⋆[ρ] = - A ∫_0^1 α∫ x ρ(x) c_1^⋆(x; [ρ_α]) and evaluate the integrals numerically; as before A denotes the lateral system area. We first return to bulk systems and illustrate in the following three different routes towards obtaining the bulk equation of state from the neural network. For this, we introduce the excess free energy density as ψ_b(ρ_b) = [ρ_b] / V, where V is the system volume. From the neural functional, the excess free energy density ψ_b^⋆(ρ_b) can be acquired via ^⋆[ρ_b] from functional line integration along a path of bulk densities according to Eq. (<ref>). Alternatively and equivalently, one can simply evaluate the neural direct correlation functional at bulk density ρ_b and due to translational symmetry at arbitrary location (e.g. x = 0) such that c_1^b ⋆ = c_1^⋆(0; [ρ_b]). Simplifying Eq. (<ref>) in bulk reveals that ψ_b^⋆'(ρ_b) = - k_B T c_1^b ⋆, where the prime denotes the derivative with respect to the bulk density argument. The excess free energy density ψ_b^⋆(ρ_b) follows from ordinary numerical integration across bulk densities up to the target value ρ_b. The numerical accuracy to which both routes coincide serves as a further valuable consistency test. Additionally, one obtains the bulk pressure P(ρ_b) from the excess free energy density via P(ρ_b) = ( ψ_b^'(ρ_b) + k_B T ) ρ_b - ψ_b(ρ_b). The pressure is equally accessible from a further route which incorporates previous results for the bulk pair structure via their low-wavelength limits according to <cit.> β. ∂ P/∂ρ_b|_T = β/ρ_b χ_T = 1 - ρ_b c̃_2^b(0) = 1/1 + ρ_b h̃(0) = 1/S(0), where one can identify the isothermal compressibility χ_T = ρ_b (∂ρ_b / ∂ P)_T. From Eq. (<ref>), P(ρ_b) is obtained by evaluation of either of the bulk correlation functions (see Sec. <ref>) in Fourier space at k = 0 for different bulk densities and by subsequent numerical integration towards the target value of ρ_b. 
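Both the functional line integration of Eq. (<ref>) and the bulk route via c_1^b ⋆ can be sketched compactly as below, with k_B T = 1 and the lateral area A set to unity; the numbers of α and density integration points are arbitrary choices, and the α = 0 end point uses that c_1 vanishes in the ideal-gas limit.

import numpy as np
import tensorflow as tf

dx, n_window = 0.01, 513
half = n_window // 2

def c1_profile(model, rho):
    # evaluate the neural c_1 functional on a full periodic density profile
    idx = (np.arange(len(rho))[:, None] + np.arange(-half, half + 1)[None, :]) % len(rho)
    return model(tf.constant(rho[idx], dtype=tf.float32)).numpy()[:, 0]

def excess_free_energy(model, rho, n_alpha=25, area=1.0):
    # Eq. (13): beta F_exc = -A int_0^1 dalpha int dx rho(x) c_1(x; [alpha rho])
    alphas = np.linspace(0.0, 1.0, n_alpha + 1)
    integrand = [0.0] + [dx * np.sum(rho * c1_profile(model, a * rho)) for a in alphas[1:]]
    return -area * np.trapz(integrand, alphas)

def c1_bulk(model, rho_b):
    window = tf.constant(np.full((1, n_window), rho_b), dtype=tf.float32)
    return float(model(window)[0, 0])

def bulk_pressure(model, rho_b, n_rho=50):
    # Eqs. (14), (15): beta psi_b'(rho) = -c_1^b(rho); beta P = (1 - c_1^b(rho_b)) rho_b - beta psi_b(rho_b)
    rhos = np.linspace(1e-4, rho_b, n_rho)
    c1b = np.array([c1_bulk(model, r) for r in rhos])
    beta_psi = -np.trapz(c1b, rhos)
    return (1.0 - c1b[-1]) * rho_b - beta_psi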
We compare the results in Fig. <ref>, where the equation of state P^⋆(ρ_b) of the neural network was acquired from functional line integration across bulk systems, cf. Eq. (<ref>), from evaluation of one-body bulk correlation values c_1^b ⋆, cf. Eq. (<ref>), and from the low-wavelength limit of two-body bulk correlations, cf. Eq. (<ref>). One finds that the results of all three routes are consistent with each other and that they match very well the highly accurate Carnahan-Starling equation of state <cit.>. A slight deviation can be noticed when evaluating P^⋆(ρ_b) via Eq. (<ref>), which constitutes the most indirect route detouring to two-body correlations. This may reflect the small discrepancy of the neural functional to simulation results (cf. Fig. <ref>) and the sensitivity of the low-wavelength limit of the static structure factor to remaining finite size effects <cit.>. As already observed for the bulk pair structure in Sec. <ref>, the neural network also clearly outperforms the Percus-Yevick theory for the bulk fluid equation of state. We recall again that neither bulk information nor data for free energies or pressures was given explicitly in the training of the neural network. Additionally, we demonstrate in Appendix <ref> that the neural functional is fit for the application of dimensional crossover <cit.> in order to obtain the bulk equation of state for the two-dimensional hard disk fluid within a reasonable range of packing fractions. For a concise comparison of free energies in inhomogeneous situations, additional reference data has to be acquired from simulations. In our grand canonical setting, thermodynamic integration <cit.> with respect to the chemical potential can be used to measure the grand potential according to Ω[ρ] = - ∫_-∞^μμ' ⟨ N ⟩. Here, the integration starts from an empty system with Ω[0] = 0 and traverses the chemical potential up to the target value μ. One needs to measure the mean number of particles ⟨ N ⟩ in a sufficient number of simulations with intermediate chemical potentials -∞ < μ' ≤μ to evaluate Eq. (<ref>) numerically. The excess free energy then follows directly from [ρ] = Ω[ρ] - k_B T ∫r⃗ρ(r⃗) (lnρ(r⃗) - 1) - ∫r⃗ρ(r⃗) ((r⃗) - μ). Thermodynamic integration according to Eq. (<ref>) has been performed for 22 systems of the test set to yield quasi-exact reference values for the excess free energy via Eq. (<ref>). The systems were selected to cover a broad range of excess free energy values, and FMT results for the excess free energy were used as a further theoretical estimate for this selection. In Tab. <ref> and Fig. <ref>, we show the errors relative to the quasi-exact simulation values when calculating the excess free energy via Rosenfeld and White Bear MkII FMT as well as from functional line integration according to Eq. (<ref>) of the neural functional. For both FMT methods, a DFT minimization (cf. Sec. <ref>) is performed to yield a self-consistent density profile ρ(x), which serves as input to the respective analytic FMT expression for [ρ]. Hence we compare consistently equilibrium states (according to the respective theory) corresponding to the same form of the external potential. The comparison reveals that the neural functional significantly outperforms Rosenfeld FMT and still yields slightly more accurate values for the excess free energy than the very reliable White Bear theory.
Regarding the above described bulk results for the free energy, this behavior is both consistent and expected, as the Rosenfeld and White Bear MkII functionals can be associated with the Percus-Yevick compressibility and Carnahan-Starling bulk equations of state respectively. Still, the test in inhomogeneous systems is a more rigorous one than in bulk, as the full nonlocal functional representation is invoked when providing c_1^⋆(x; [ρ]) with an inhomogeneous density profile as input. Given that the functional line integration of c_1^⋆(x; [ρ]) via Eq. (<ref>) is practically immediate, one can deem ^⋆[ρ] itself a corresponding neural functional for the excess free energy that enables a full description of the thermodynamics of inhomogeneous fluids to high accuracy. As we present below, this quantitative precision is preserved when applying the neural functional in a predictive manner in the self-consistent calculation of density profiles. § PREDICTING INHOMOGENEOUS FLUIDS VIA NEURAL DFT §.§ Going beyond analytic approximations In the previous section, the trained model has been put to test by deriving related quantities such as c_2^⋆(x, x'; [ρ]) from autodifferentiation and ^⋆[ρ] from functional line integration in order to assess its performance against analytic and numerical reference results. We now turn to the application of the neural functional c_1^⋆(x; [ρ]) in the context of the self-consistent determination of density profiles according to the DFT Euler-Lagrange equation. This is achieved by rearranging Eq. (<ref>) to the standard form <cit.> ρ(r⃗) = exp(-β ((r⃗) - μ) + c_1(r⃗; [ρ])). A fixed-point (Picard) iteration with mixing parameter α can be used to determine the density profile from Eq. (<ref>) according to ρ(r⃗) ← (1 - α) ρ(r⃗) + αexp(-β ((r⃗) - μ) + c_1(r⃗; [ρ])). The degree of convergence is determined from the remaining difference of right and left hand side of Eq. (<ref>). With the trained neural functional at hand, one can evaluate the one-body direct correlation function in Eq. (<ref>) via the surrogate c_1^⋆(x; [ρ]) in each iteration step. In the following, the use of c_1^⋆(x; [ρ]) in this context will be referred to as neural DFT. We note two minor technical points concerning the use of the neural functional in the Picard iteration. It was observed that a conservative choice of α is necessary during the first few iterations to ensure numerical stability. After this burn-in, the mixing parameter can be set to usual values (e.g. α = 0.05). Furthermore, the convergence criterion has to be relaxed as compared to typical choices in analytic DFT methods due to the remaining intrinsic uncertainty of c_1^⋆(x; [ρ]). The mean average error after training, cf. Sec. <ref>, provides an estimate for the expected relative uncertainty of the density profile according to Eq. (<ref>). Depending on the specific problem, the error might not decrease any further than that during the iteration (<ref>). Neither of these points caused any practical hinderance in applications. The treatment of Eq. (<ref>) in neural DFT is conceptually not different than in standard DFT methods. However, the model c_1^⋆(x; [ρ]) relieves the theory from being restricted by the available approximations for the one-body direct correlation function as generated from analytic expressions of the excess free energy functional [ρ] via Eq. (<ref>). 
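A minimal sketch of this neural DFT iteration is given below; the mixing parameters, the burn-in length and the convergence threshold are assumptions in the spirit of the remarks above, k_B T = 1, and regions of infinite external potential are kept at zero density.

import numpy as np
import tensorflow as tf

dx, n_window, beta = 0.01, 513, 1.0
half = n_window // 2

def c1_profile(model, rho):
    # evaluate the neural c_1 functional on a full periodic density profile
    idx = (np.arange(len(rho))[:, None] + np.arange(-half, half + 1)[None, :]) % len(rho)
    return model(tf.constant(rho[idx], dtype=tf.float32)).numpy()[:, 0]

def neural_dft(model, V_ext, mu, alpha=0.05, alpha_burn=0.005, n_burn=50,
               tol=1e-5, max_iter=10000):
    # Picard iteration of Eq. (20):
    # rho <- (1 - alpha) rho + alpha exp(-beta (V_ext - mu) + c_1(x; [rho]))
    rho = np.where(np.isfinite(V_ext), 0.1, 0.0)          # simple initial guess
    for it in range(max_iter):
        a = alpha_burn if it < n_burn else alpha          # conservative mixing during burn-in
        rho_new = np.exp(-beta * (V_ext - mu) + c1_profile(model, rho))
        rho_new[~np.isfinite(V_ext)] = 0.0                # density vanishes inside hard walls
        if np.max(np.abs(rho_new - rho)) < tol:           # remaining mismatch of Eq. (19)
            break
        rho = (1.0 - a) * rho + a * rho_new
    return rho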
We emphasize that, unlike in previous work <cit.>, no analytic ansatz had to be provided and that our method is generic for the determination of a suitable functional from a given model Hamiltonian, thus indeed constituting a “machine learning black box” <cit.> regarding the training procedure. However, in contrast to a closed black box, the inner workings of the resulting neural correlation functional can be inspected very thoroughly via the neural functional calculus laid out above. Also note that, while the model works at the level of the one-body direct correlation function, the free energy is readily available from functional line integration, cf. Sec. <ref>. Lastly, we point out that c_1^⋆(x; [ρ]) captures the entirety of the intrinsic correlations and that further improvements are conceivable by only learning differences to an analytic reference functional. To demonstrate the capabilities of our method, we refrain from this route and show that the trained neural functional alone already exceeds the accuracy of FMT. §.§ Comparison to FMT In the following, we benchmark the self-consistent inhomogeneous density profiles obtained via neural DFT against FMT results. For this comparison, the Rosenfeld <cit.> and White Bear MkII <cit.> FMT functionals are considered and the simulated density profiles are taken as quasi-exact reference data. The FMT functionals are the most profound analytic description of the hard sphere fluid with the White Bear MkII theory being the state-of-the-art treatment of short-ranged intermolecular repulsion in classical DFT. Nevertheless, measurable and systematic deficiencies still remain, e.g. in highly correlated systems <cit.>. We point the reader to Ref. Roth2010 for a thorough account of FMT and to Ref. Sammueller2023 for a very recent quantitative assessment. Note that the tensorial weights of <cit.> to describe hard sphere freezing are not included in our investigation. The comparison is set up as follows. For each hard sphere system of the test set (see Sec. <ref>), we determine the density profile ρ(x) from the Rosenfeld and White Bear MkII FMT functionals as well as from c_1^⋆(x; [ρ]) via the Picard iteration (<ref>) of the Euler-Lagrange Eq. (<ref>). For this, only the known form of the external potential (x) and the value μ of the chemical potential are prescribed. As reference density profiles are available from GCMC simulations, we can evaluate the error Δρ(x) of each of the DFT results relative to the simulation data for ρ(x). From here, different scalar metrics for the quantitative agreement of self-consistent DFT profiles and simulation results are considered. In Fig. <ref>, both global and local error measures for the deviation of FMT as well as neural DFT to simulation data are depicted. For the assessment of the global error, we show the L_2-norm ‖Δρ‖_2 of the discrepancy to the reference profile, which is normalized by the mean density ρ̅ of each system respectively. As the test data covers very dilute to very dense systems, this relative global error measure is plotted as a function of ρ̅ to discern the behavior with respect to varying global average density. Similarly, we define an upper estimate for the relative local error by evaluating the maximum norm ‖Δρ‖_∞ of the density deviation divided by the maximum value ‖ρ‖_∞ of the GCMC density profile. This quantity is resolved against the maximum ‖ρ‖_∞ of the respective inhomogeneous density, thus enabling the detection of local discrepancies, e.g. 
in the vicinity of maxima and discontinuities of the density profile. One recognizes that neural DFT yields substantially better results than the FMT functionals with regard to both error measures. Compared to the Rosenfeld results, both the global and the local error are decreased by approximately an order of magnitude. Surprisingly, even the White Bear MkII functional is not able to match the accuracy of the neural DFT, which is noticeable especially for large values of ρ̅ and of ‖ρ‖_∞. §.§ Simulation beyond the box A particular advantage of the local nature of the neural functional c_1^⋆(x; [ρ]) is its applicability to systems of virtually arbitrary size. As explained in Sec. <ref>, it is sufficient to provide the density profile within a rather narrow window as input to the neural network to infer the value of the one-body direct correlation function at the center of the density section. The model c_1^⋆(x; [ρ]) can therefore be used directly in the Euler-Lagrange Eq. (<ref>) for the prediction of planar systems of arbitrary length. Due to the low computational demands of solving this equation self-consistently, this method is suitable even in multiscale problems where macroscopic length scales compete with and are influenced by microscopic correlations and packing features. Although one could argue that analytic DFT methods already account for such tasks, importantly the neural functional c_1^⋆(x; [ρ]) acts as a drop-in replica of the (almost) simulation-like description of the intrinsic correlations. Therefore, neural DFT makes it possible to fuse simulation data with common DFT methods, thus providing a means to “simulate beyond the box”. Simulation beyond the box is demonstrated in Fig. <ref>, where a system with a length of 1000 σ is considered; the numerical grid size remains unchanged at 0.01 σ. Our setup implies that for colloids of, say, size σ = 1 μm, we have a spatial resolution of 10 nm across the entirety of a system of macroscopic size 1 mm. As a demonstration, similar to the strategy in Sec. <ref>, a sequence of spatially connected randomized potentials is generated, and the chemical potential is set here to μ = 0. Using c_1^⋆(x; [ρ]), we obtain the corresponding density profile straightforwardly from the simple iteration scheme (<ref>). The computational cost for the determination of ρ(x) with neural DFT is negligible as compared to an analogous many-body simulation, which is hardly feasible on such length scales. A second example, which is arguably more relevant from a physical point of view <cit.>, is given in Fig. <ref>, where we show the sedimentation behavior of the hard sphere fluid as obtained with neural DFT. For this, a local chemical potential μ_loc(z) = μ - (z) that decreases linearly with respect to the height z is imposed in a system which is bounded from the bottom (z = 0) and the top (z = 1000 σ) by hard walls. The spatial variation of μ_loc(z) is chosen small enough to enable thermal diffusion across the whole sedimentation column and to yield locally an almost bulk-like behavior except near the upper and lower hard walls. The method reproduces both the highly correlated nature of ρ(z) in the vicinity of the walls as well as its intermediate behavior within the sedimentation column, which follows closely the bulk equation of state (see Sec. <ref>), as one would expect within a local density approximation <cit.>.
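The sedimentation setup can be sketched as below; the slope of the linear local chemical potential and the treatment of the bounding walls are assumptions chosen for illustration, and neural_dft together with the trained model are taken from the earlier sketches, so that this fragment only prepares the external potential of the macroscopic column.

import numpy as np

dx, H = 0.01, 1000.0                              # grid spacing and column height in units of sigma
z = np.arange(0.0, H, dx)

mu, slope = 0.0, 2.0e-3                           # mu_loc(z) = mu - slope * z (slope assumed)
V_ext = slope * z                                 # linear, gravity-like external potential
V_ext[(z < 0.5) | (z > H - 0.5)] = np.inf         # hard walls at the bottom and the top (placement assumed)

# rho = neural_dft(model, V_ext, mu)              # solver and trained network from the sketches above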
§ CONCLUSION AND OUTLOOK In this work, we have outlined and validated a machine learning procedure for representing the local functional map from the density profile to the one-body direct correlation function via a neural network. The resulting neural functional was shown to be applicable as a powerful surrogate in the description of inhomogeneous equilibrium fluids. This was demonstrated for the hard sphere fluid, where we have used GCMC simulations in randomized inhomogeneous planar environments for the generation of training, validation and test data. Density and one-body direct correlation profiles followed respectively from direct sampling and from evaluation of Eq. (<ref>). DFT elevates the role of the one-body direct correlation function c_1(x) to that of an intrinsic functional c_1(x; [ρ]) depending on the density profile ρ(x) but being independent of the external potential. We exploited this fact in the construction of our neural network, which takes as input a local section of the discretized density profile around a fixed location x and outputs the value of the one-body direct correlation functional c_1(x; [ρ]) at that specific location. Establishing a pointwise inference of c_1(x; [ρ]) instead of trying to represent the global functional mapping of the entire one-body profiles comes with various advantages, such as independence of the box size, the correct description of the short-range behavior of c_1(x; [ρ]), and a very significant improvement of training statistics. The nonlinear and nonlocal functional relationship was realized by fully-connected hidden layers with smooth activation functions and a standard supervised training routine was used. The achieved mean average error over the test set was of the same order of magnitude as the noise floor of the simulations, thus being indicative of full representational power of the neural correlation functional within the considered simulation data. Whether the quality of the model can be improved further by performing more extensive sampling to reduce the statistical noise of the simulation profiles remains to be investigated in the future. Additionally, active and reinforcement machine learning techniques could be useful for interleaving the training and simulation process, thereby guiding the generation of reference data in order to explore the space of inhomogeneous systems more efficiently and exhaustively. The neural functional was put to test by verifying numerous physical relations in bulk and in inhomogeneous systems. In particular, it was shown that the two-body direct correlation functional c_2(x, x'; [ρ]) as well as higher-order correlations are accessible from the model via automatic differentiation. In bulk, the pair structure as described by the neural network significantly outperforms the Percus-Yevick theory and is even able to compete with simulation results <cit.>, although no bulk data was used during training. In inhomogeneous situations, the conformance of the neural functional to the thermal Noether sum rules (<ref>) and (<ref>) as well as to spatial symmetry requirements holds to high accuracy. The excess free energy [ρ] is readily and efficiently available via functional line integration of the model according to Eq. (<ref>) and the results agree with those obtained from simulations. The bulk equation of state can be acquired consistently from various routes and its accuracy is comparable to the Carnahan-Starling result. 
Dimensional crossover is feasible for the calculation of the bulk equation of state for the two-dimensional hard disk system. Arguably the most important consequence of the neural functional framework is the applicability of c_1^⋆(x; [ρ]) in the self-consistent calculation of density profiles by solving the Euler-Lagrange Eq. (<ref>) of classical DFT. As the one-body direct correlation function is faithfully represented by the neural network, one is exempted from having to find analytic approximations for c_1(x; [ρ]) or for its generating functional [ρ]. Although FMT provides such approximations for the hard sphere fluid with high precision, we could demonstrate that our neural functional outperforms both the Rosenfeld <cit.> as well as the White Bear MkII <cit.> functional. For this, Eq. (<ref>) was solved self-consistently for all 150 randomized local chemical potentials of the test set to obtain ρ(x), where c_1(x; [ρ]) was given either analytically by FMT or evaluated via c_1^⋆(x; [ρ]). The comparison of the results to the simulated density profiles reveals that neural DFT yields global and local errors that are up to an order of magnitude lower than those of FMT. Furthermore, due to the flexibility that comes with the local functional mapping, the neural network could be used as a means to “simulate beyond the box”. That is, while the training was based solely on simulation data from systems of manageable size, the resulting model c_1^⋆(x; [ρ]) is directly applicable for predictions on much larger length scales. We demonstrated this by imposing a spatial sequence of randomized external potentials on a length of 1000 σ. While the explicit numerical simulation of such a system is comparatively cumbersome, neural DFT offers a way to achieve close to simulation-like accuracy at low computational effort. Furthermore, we have considered a sedimentation column with a height of 1000 σ that is bounded by hard walls. Neural DFT is capable of both resolving the adsorption at the walls microscopically and efficiently capturing the long-range density decay with increasing height. The presented fusion of machine learning and DFT can therefore be another useful technique to make headway in the multiscale description of soft matter <cit.>. On the opposite side of the length scale spectrum, it might also be worthwhile to consider quantum mechanical approaches, either in the context of ab initio simulation methods for the generation of training data or for cross-fertilization of machine learning ideas, in particular regarding topical applications in quantum DFT <cit.>. While much insight could be gained by considering the well-studied hard sphere fluid, the application of our machine learning procedure is arguably even more useful for particle models that lack satisfactory analytic density functional approximations. Although mean-field descriptions account surprisingly well for soft and attractive contributions <cit.>, e.g. in the Lennard-Jones fluid, analytic efforts to go beyond this approximation are sparse <cit.>. In the future, the application of neural DFT to such thermal systems may prove to be useful either via isothermal training or by providing the temperature as a further input quantity. We expect the general method to hold up even for complex particle models, e.g. containing many-body contributions <cit.> or orientational degrees of freedom as treated within molecular DFT <cit.>, provided that training data of sufficient accuracy and quantity can be generated.
A proper treatment of the arising phase transitions and interfacial phenomena might be subtle both in simulation as well as from a machine learning perspective. Even though we saw no need for a more sophisticated training procedure in our investigations, it could be useful to consider physics-informed machine learning <cit.> as a technique for enforcing exact physical relations of the underlying problem directly during training. Sum rules in bulk or in inhomogeneous systems, e.g. the thermal Noether identities (<ref>) and (<ref>), might be suitable candidates for this task. Analogous to the evaluation of derivatives in physics-informed neural networks, we have shown the necessary quantities to be accessible by automatic differentiation of the neural functional. When considering nonequilibrium systems, power functional theory (PFT) <cit.> establishes an exact functional many-body framework which is analogous to that of DFT in equilibrium. A central ramification of PFT is the existence of a functional map from the time-dependent one-body density ρ(r⃗, t) and current J⃗(r⃗, t) to the internal force profile f⃗_int(r⃗, t; [ρ, J⃗]), which is in general nonlocal in space and causal in time t. Recent work by <cit.> demonstrated that machine learning this kinematic internal force functional yields highly promising results and overcomes the analytic and conceptual limitations of dynamical density functional theory. In this regard, our method can be put into a more general context as it may be viewed as a mere special case for equilibrium systems where J⃗(r⃗, t) = 0. The topical problem of accurately describing nonequilibrium many-body physics is certainly a natural contender for the application and extension of our neural functional framework, with many practical questions arising, e.g. concerning the generation of training data or the choice of neural network architecture. Lastly, the possibility of extending the machine learning procedure from planar symmetry to more general two-dimensional geometries or even to the full three-dimensional problem is worth contemplating. Especially for the latter, the amount of required training data seems restrictive at first if one considers randomized simulations in the fully inhomogeneous geometry. However, results obtained in the planar case could be leveraged since they already capture the crux of the internal interactions, as was shown in this work. Therefore, it may be possible to supplement the planar data with only a few select higher-dimensional simulations to incorporate the remaining nontrivial effects due to the more general geometry. As data-efficiency will be vital in this case, one might benefit from more extensive data augmentation, and the use of equivariant neural networks <cit.> could provide a way of casting certain symmetries directly into the model architecture. We thank T. Zimmermann, T. Eckert and N. C. X. Stuhlmüller for useful comments. This work is supported by the German Research Foundation (DFG) via Project No. 436306241. § HIGHER-ORDER CORRELATIONS Analogous to Sec. <ref>, we demonstrate that higher-order correlations can be obtained from the neural correlation functional by nested automatic differentiation. This is due to the fact that the hierarchy of direct correlation functions c_n(r⃗, r⃗', …, r⃗^(n); [ρ]), n ≥ 2, is accessible from successive functional derivatives of the one-body direct correlation functional <cit.>, c_n(r⃗, r⃗', …, r⃗^(n-1); [ρ]) = δ^n-1 c_1(r⃗; [ρ])/δρ(r⃗') …δρ(r⃗^(n-1)). 
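In practice, the Hessian of the network output with respect to the density window yields the planar three-body bulk function considered below; a sketch using nested gradient tapes in TensorFlow is shown here, where each functional derivative contributes a factor 1/Δx and model again denotes the trained network.

import numpy as np
import tensorflow as tf

dx, n_window = 0.01, 513

def c3_bulk_planar(model, rho_b):
    # c3_b(x, x') = delta^2 c_1(0; [rho]) / (delta rho(x) delta rho(x')) at constant rho_b
    window = tf.constant(np.full((1, n_window), rho_b), dtype=tf.float32)
    with tf.GradientTape() as outer:
        outer.watch(window)
        with tf.GradientTape() as inner:
            inner.watch(window)
            c1 = model(window)
        grad = inner.gradient(c1, window)          # first derivative, shape (1, n_window)
    hess = outer.jacobian(grad, window)            # Hessian, shape (1, n_window, 1, n_window)
    return tf.reshape(hess, (n_window, n_window)).numpy() / dx**2

# c3 = c3_bulk_planar(model, rho_b=0.7)            # cf. the bulk density considered in the figure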
As illustrated in the main text, translational symmetry can be applied in bulk fluids such that the resulting bulk correlation function c_n^b(r⃗, …, r⃗^(n-2)) = c_n(0, r⃗, …, r⃗^(n-2); [ρ_b]) only incorporates n - 1 remaining position coordinates. We specialize again to the planar geometry of our neural functional and show in Fig. <ref> the three-body bulk correlation function c̅_3^b ⋆(x, x') for a bulk density of ρ_b = 0.7 σ^-3. While the computation of c̅_2^b ⋆(x) is practically immediate via a single reverse mode autodifferentiation pass, going to the three-body correlation function comes at the price of having to evaluate the Hessian of c_1^⋆(x; [ρ]), for which different strategies exist <cit.>. In principle, one can proceed by nesting autodifferentiation layers to obtain further members of the hierarchy (<ref>), albeit being restricted by the practicability of the actual evaluation and the efficacy of the result. Note that the computational effort at the three-body level is by no means restrictive and that growing numerical demands are expected when considering higher-order correlations. The computation and analysis of c̅_3^b(x, x') might be especially useful for more complex fluid models, e.g. containing internal three-body interactions <cit.>. We compare c̅_3^b ⋆(x, x') to analytic approximations based on FMT. For both the Rosenfeld and the White Bear MkII functional, the three-body bulk direct correlation function is analytic in Fourier space. We point the reader to Ref. Rosenfeld1989 for an expression of the original Rosenfeld result in terms of vectorial weight functions and to Refs. Kierlik1990, Phan1993 for an equivalent representation via scalar weights. As the weight functions remain unchanged, the White Bear MkII result follows immediately from the modification of the excess free energy density as laid out in Ref. HansenGoos2006. A cumulant expansion of the bulk result of the three-body direct correlation function in Fourier space can be transformed to real space analytically, which in planar geometry gives c̅_3^b(x, x') = - (b R^4/a) exp(-(x^2 - x x' + x'^2)/(a R^2)), where the width parameter a and the prefactor b are determined by a = (ν/κ) (3/5) (53 - 25 η + 8 η^2)/(30 + 2 η + 5 η^2 - η^3), b = κ (8 π/(3 √(3))) (30 + 2 η + 5 η^2 - η^3)/(1 - η)^5, with the packing fraction η = πρ_b σ^3/6. The correction factors ν and κ are set to unity in the Rosenfeld FMT and attain the forms ν = (53 - 35 η + η^2 + 5 η^3)/(53 - 25 η + 8 η^2), κ = (30 - 6 η)/(30 + 2 η + 5 η^2 - η^3) in the White Bear MkII case. The comparison reveals that the form of the neural three-body bulk correlation function c̅_3^b ⋆(x, x') is plausible and that it captures genuine features which go beyond both FMT descriptions. The Rosenfeld FMT yields a large discrepancy in the core region x, x' ≈ 0, which is significantly underestimated as compared to the results from the neural functional and from the White Bear theory. We recall that, as in Sec. <ref>, the tensorial weights of <cit.> have not been used in the FMT functionals and that their inclusion might be particularly relevant on the level of higher-order correlations. In this vein, investigating members of the direct correlation hierarchy (<ref>) with the neural correlation functional could be a valuable aid for testing and refining analytic FMT functionals. § SPATIAL SYMMETRY OF THE NEURAL TWO-BODY DIRECT CORRELATION FUNCTIONAL A further consistency test of c_2^⋆(x, x'; [ρ]) arises due to its expected symmetry with respect to an interchange of the planar position coordinates x and x'.
Recall that the excess free energy functional F_exc[ρ] generates the two-body direct correlation function according to c_2(r⃗, r⃗'; [ρ]) = - δ^2 β F_exc[ρ]/(δρ(r⃗) δρ(r⃗')), see Eqs. (<ref>) and (<ref>). One can directly recognize from the symmetry of the second functional derivative in Eq. (<ref>) that c_2(r⃗, r⃗'; [ρ]) = c_2(r⃗', r⃗; [ρ]) must hold. On the basis of the neural direct correlation functional in planar geometry, assessing the validity of the identity c_2^⋆(x, x'; [ρ]) = c_2^⋆(x', x; [ρ]) is a highly nontrivial test. This is due to the fact that c_2^⋆(x, x'; [ρ]) evaluated at certain positions x and x' follows from automatic differentiation of c_1^⋆(x; [ρ]), where the input density window is centered around the location x, see Sec. <ref>. On the other hand, when formally evaluating c_2^⋆(x', x; [ρ]), where the arguments x and x' are now reversed, the density window is centered around x', hence constituting a generally very different and a priori unrelated input profile. One can expect Eq. (<ref>) to be recovered only if the physical implications of Eq. (<ref>) are captured correctly by the neural functional. Note that Eq. (<ref>) is a necessary condition for the existence of a unique neural excess free energy functional F_exc^⋆[ρ], which can practically be obtained via functional line integration, see Sec. <ref>. We exemplify in Fig. <ref> that the neural two-body direct correlation functional c_2^⋆(x, x'; [ρ]) obtained via autodifferentiation of c_1^⋆(x; [ρ]) indeed satisfies the symmetry requirement (<ref>) to very high accuracy. § NEURAL EQUATION OF STATE FOR HARD DISKS VIA DIMENSIONAL CROSSOVER Although the neural functional c_1^⋆(x; [ρ]) was acquired explicitly for the three-dimensional hard sphere fluid, one can use dimensional crossover techniques to obtain bulk results for the two-dimensional hard disk system. This is facilitated by investigating the behavior of the hard sphere fluid under narrow confinement, which constitutes a quasi-two-dimensional scenario. With this method, one obtains the equation of state for the hard disk fluid from c_1^⋆(x; [ρ]), as we demonstrate in the following. We proceed similarly to Sec. <ref> and utilize Eq. (<ref>) to express the pressure P(ρ_b) via the excess free energy density ψ_b(ρ_b), which we aim to compute for a range of bulk densities ρ_b. Whereas c_1^⋆(x; [ρ]) was evaluated for the three-dimensional bulk fluid at spatially constant density, cf. Eq. (<ref>), here a suitable density profile ρ_2D(x) is constructed as input to the neural direct correlation functional in order to emulate narrow planar confinement. For this, we choose ρ_2D(x) = (ρ_b/x_w) Θ(x_w/2 - |x|) with the Heaviside function Θ(·); note that Eq. (<ref>) is a Dirac series and yields the Dirac distribution for x_w → 0. The neural direct correlation functional is then evaluated at the center of this assumed slit, and the values c_1^⋆(0; [ρ_2D]) are used analogously to Sec. <ref> for the determination of P_2D^⋆(ρ_b). The equation of state for the associated two-dimensional hard disk system follows formally for x_w → 0. As this limit is not directly accessible in practice, we assess the obtained values for finite but small slit widths 0.3 ≤ x_w/σ ≤ 1 and extrapolate to x_w = 0 via a quadratic fit. The resulting equation of state P_2D^⋆(ρ_b) for the two-dimensional hard disk fluid as obtained from this dimensional crossover on the basis of the neural network is shown in Fig. <ref>.
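A compact sketch of this crossover procedure is given below: for each slit width a slab profile is constructed, the neural functional is evaluated at the slit center, and the resulting pressures are extrapolated quadratically to x_w → 0. Both neural_c1 and pressure_from_c1_center are placeholders; in particular, the latter stands in for the thermodynamic route via ψ_b(ρ_b) referenced above and is not the actual expression.

```python
import numpy as np

def neural_c1(rho_profile, dx):
    """Placeholder for the trained functional evaluated on a planar grid."""
    return np.zeros_like(rho_profile)

def pressure_from_c1_center(c1_center, rho_b):
    """Placeholder for the thermodynamic route from c_1 at the slit center to P_2D."""
    return rho_b * (1.0 - c1_center)        # illustrative stand-in only

def hard_disk_pressure(rho_b, dx=0.01, widths=np.linspace(0.3, 1.0, 8)):
    x = np.arange(-5.0, 5.0, dx)            # planar grid in units of sigma, slit centered at 0
    p_vals = []
    for xw in widths:
        rho_2d = np.where(np.abs(x) < xw / 2.0, rho_b / xw, 0.0)   # Dirac-series slab profile
        c1_center = neural_c1(rho_2d, dx)[np.argmin(np.abs(x))]
        p_vals.append(pressure_from_c1_center(c1_center, rho_b))
    coeffs = np.polyfit(widths, p_vals, deg=2)      # quadratic fit in the slit width x_w
    return np.polyval(coeffs, 0.0)                  # extrapolation to x_w -> 0

P_2d = hard_disk_pressure(rho_b=0.5)
```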
We additionally display analytic equations of state from scaled particle theory <cit.> and by <cit.>, which serve as references. One recognizes that reasonable results can be achieved for low and medium densities, but that deviations from the analytic results become noticeable for ρ_b > 0.7 σ^-2. Nevertheless, it is both surprising and reassuring that the neural functional is capable of predicting correlations in narrow confinement, as no such situations were explicitly included in the training data. Recall that hard walls were imposed only at the borders of the simulation box of length L = 20 σ and that the inhomogeneous external potential within the simulation domain consisted solely of Fourier modes and of piecewise linear functions, see Eq. (<ref>). Presumably, improvements over the results presented in Fig. <ref> could be obtained especially for large densities by including situations of very narrow confinement explicitly in the training data. From our perspective, the successful recovery of a viable two-dimensional equation of state demonstrates that c_1^⋆(x; [ρ]) indeed captures the intricate functional relationship of the underlying physical problem instead of acting as a mere interpolation tool with respect to the encountered training data.
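For orientation, the two analytic hard disk references can be evaluated directly; the sketch below uses the compressibility-factor forms commonly quoted for scaled particle theory and for Henderson's equation of state, which should be checked against the cited originals.

```python
import numpy as np

def eos_hard_disks(rho_b, sigma=1.0):
    """Commonly quoted bulk pressures (in units of k_B T) for hard disks of diameter sigma:
    scaled particle theory and Henderson's empirical refinement (assumed forms)."""
    eta = np.pi * rho_b * sigma**2 / 4.0          # two-dimensional packing fraction
    p_spt = rho_b / (1.0 - eta)**2
    p_henderson = rho_b * (1.0 + eta**2 / 8.0) / (1.0 - eta)**2
    return p_spt, p_henderson

for rho in (0.3, 0.5, 0.7, 0.8):
    print(rho, eos_hard_disks(rho))
```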
http://arxiv.org/abs/2307.06084v1
20230712111425
Neuromorphic analog circuits for robust on-chip always-on learning in spiking neural networks
[ "Arianna Rubino", "Matteo Cartiglia", "Melika Payvand", "Giacomo Indiveri" ]
cs.NE
[ "cs.NE" ]
Neuromorphic analog circuits for robust on-chip always-on learning in spiking neural networks Arianna Rubino1, Matteo Cartiglia1, Melika Payvand, Giacomo Indiveri Institute of Neuroinformatics, University of Zurich and ETH Zurich Email: [giacomo|rubinoa|camatteo]@ini.uzh.ch 1 These authors have contributed equally to this work. This work has received funding from European Union's Horizon 2020 research and innovation program under grant agreement No 871737 (“BeferroSynaptic”), from the European Research Council (ERC) under grant agreement No 724295 (“NeuroAgents”), the Swiss National Science Foundation Sinergia project CRSII5-18O316, and the UZH Candoc fellowship FK-22-084. Received: 2 February 2023 / Accepted: 4 July 2023 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Mixed-signal neuromorphic systems represent a promising solution for solving extreme-edge computing tasks without relying on external computing resources. Their spiking neural network circuits are optimized for processing sensory data on-line in continuous-time. However, their low precision and high variability can severely limit their performance. To address this issue and improve their robustness to inhomogeneities and noise in both their internal state variables and external input signals, we designed on-chip learning circuits with short-term analog dynamics and long-term tristate discretization mechanisms. An additional hysteretic stop-learning mechanism is included to improve stability and automatically disable weight updates when necessary, to enable continuous always-on learning. We designed a spiking neural network with these learning circuits in a prototype chip using a 180 nm CMOS technology. Simulation and silicon measurement results from the prototype chip are presented. These circuits enable the construction of large-scale spiking neural networks with online learning capabilities for real-world edge computing tasks. always-on learning, edge-computing, on-chip learning online, SNN, hysteresis, tristability. § INTRODUCTION The requirements of artificial intelligence (AI) systems operating at the edge are similar to those that living organisms face to function in daily life. They need to measure sensory signals in real-time, perform closed-loop interactions with their surroundings, be energy-efficient, and continuously adapt to changes in the environment and in their own internal state. These requisites are well supported by neuromorphic systems and emerging memory technologies that implement brain-inspired mixed-signal spiking neural network (SNN) architectures <cit.>. These types of SNNs operate in a data-driven manner, with an event-based representation that is typically sparse in both space and time. Since they compute only when data is present, they are very power efficient. Similar to the biological neural systems they model, these SNNs are particularly well-suited to processing real-world signals. 
They can be designed to operate at the same data-rate of the input streams in real-time by matching the time constants of neural computation with those of the incoming signal dynamics. However, similar to their biological counterparts, these systems are affected by a high degree of variability and sensitivity to noise. One of the most effective strategies that biology uses to cope with noise and variability is to utilize adaptation and plasticity. This strategy has also been adopted by the neuromorphic community: several on-chip implementations of spike-based learning circuits have been proposed in the past <cit.>. However, few have addressed the problem of being able to operate robustly and autonomously in continuous time, with the ability to switch automatically and reliably between learning and inference modes. Following the original neuromorphic engineering approach <cit.>, we propose a set of analog circuits that faithfully emulate synaptic plasticity mechanisms observed in pyramidal cells of cortical circuits and implement complex spike-based learning and state-dependent mechanisms that support this functionality. In addition, we extend the concept of long-term bi-stability of synaptic weights, proposed to increase robustness to noise and variability in the input signals <cit.>, to a tristate stability and weight discretization circuit that increases the resolution of the (stable and crystallized) synaptic weights. The synaptic plasticity circuits update an internal state variable of the synapse on every pre-synaptic input spike. The change in this state variable is computed in continuous time by the soma block of the neuron. In parallel, depending on its value, the internal variable is driven to one of three possible stable states and converted to a discrete three-state synaptic weight current value. The post-synaptic learning circuits comprise an additional mechanism that gates the weight changes, to stop the learning process when the neuron's mean firing rate is outside a defined learning window. The circuits were fabricated on a prototype SNN chip designed in a 180 nm 6M1P CMOS technology and tested within a network of 4 neurons with 64 synapses each (see Fig. <ref>). In the following sections we describe the main building blocks used at both the synapse and neuron level, demonstrate their expected behavior with circuit simulations, and provide experimental results measured from the chip. § NETWORK ARCHITECTURE The block diagram of each neuron in the network is shown in Fig. <ref>. Input digital events (_) arrive at the individual synapses via asynchronous logic <cit.> and trigger local weight update circuits to induce a change in the voltage stored on a local capacitor by an amount determined by the post-synaptic learning circuits. In parallel, a tristate stability circuit drives this internal voltage to one of three possible stable states. This local internal voltage is then discretized and converted to a low, intermediate or high current value. All currents produced by all synapses are summed spatially and conveyed to a differential pair integrator (DPI), which integrates the weighted sum over time <cit.>. A parallel and analogous pathway receives input events representing a desired target signal (_), and produces a corresponding current from its dedicated DPI circuit. The target and input currents are both summed to drive the neuron's post-synaptic Integrate & Fire (I&F) circuit <cit.>, and subtracted to drive the soma's Delta rule circuit <cit.>. 
The Delta rule circuit produces either positive or negative weight update signals proportional to the difference between the target input and the weighted synaptic input. These signals are broadcast to all the neuron's input synapses in continuous time if learning is enabled. Learning is enabled (or disabled) by means of two hysteretic Winner-Take-All (hWTA) circuits <cit.> that compare the neuron's mean output firing rate to a low and a high threshold (see Section <ref> for details). § CIRCUITS As the details of the Delta rule and I&F circuits have already been presented <cit.>, we describe the synapse learning circuits and the soma hWTA circuit. §.§ Plastic synapse circuit Figure <ref> presents all of the learning circuits used at the synaptic level. With every pre-synaptic spike, the weight update circuit (Fig. <ref>) increases or decreases the internal analog weight variable _ by an amount determined by the voltages _ and _, produced by the post-synaptic Delta rule circuits. The tri-stability supply voltage circuit (Fig. <ref>), produces the biases that power either of the positive feedback amplifiers of Fig. <ref>, depending on the state of _ with respect to _dd/2. The tri-stability circuit (Fig. <ref>) consists of two slew-rate limited positive feedback amplifiers which slowly drive _ towards ground, _dd/2, or _dd depending on the value of _ relative to _ and _. The weight discretization circuit (Fig. <ref>) sets the value of the effective synaptic current _ to _, _, or 2_ depending on the state of _ with respect to _ and _. §.§ Hysteretic WTA for “stop-learning” Figure <ref> shows an instance of a hWTA circuit: it consists of two identical cells, (M_2–M_6) and (M_7–M_11) that compete with each other. As soon as one cell wins (e.g., the left one), the bias current _ is copied and added to the input current of the winning branch (e.g., _). This creates a hysteresis window, such that for the winning (left) cell to lose the competition, its input current has to decrease below the input current of the opposite branch by an additional factor equal to the bias current (_<_ - _). The output voltage of this circuit _ switches to “high” when the left cell wins, and to “low” when the right cell becomes the winner. To implement the “stop-learning” mechanism <cit.>, we produce a current _ (a surrogate of the neuron's calcium concentration) by integrating the post-synaptic neuron spikes with a DPI circuit <cit.>. We then compare this current to two thresholds with the two hWTA circuits. The digital output nodes of the two hWTA circuits were connected to logic gates to produce an active high signal when the _ current is within the set bounds (i.e., within the learning region) and a low when it is outside this region. This signal is then used as a “third factor” to enable or disable the Delta rule weight circuit, and switch on or off the weight updates. The hysteresis windows of the hWTA circuits are used to distinguish between cases in which the target input is present (to enable learning) or absent (to disable learning and automatically switch to an “inference” mode). The effect of this window is described in Section <ref>. § RESULTS We validate the learning circuits with both circuit simulations and with experimental results measured from the fabricated chip. §.§ Circuit simulation results Here, we show simulations of a single neuron and 40 plastic synapses during a learning task and show how the hWTA enables automatic switching from learning to inference. 
After initializing all synaptic weights to zero, we started a training phase by stimulating each plastic synapse with a 25 Hz input spike train, and by sending a spike train with a 1 kHz frequency to the target synapse. As expected, during this training phase, the weights of the synapses potentiated and the total weighted synaptic input current increased (see the red trace of Fig. <ref>). During the inference phase, we removed the target input spike train while keeping on stimulating the input synapses. As expected, without this extra input, the average mean firing rate of the neuron decreased, and the calcium concentration current fell below the upper bound of the learning region. Figure <ref> shows this task performed with two values for _, which governs the width of the hysteresis window. Without a proper hysteresis window (Fig. <ref>), when the neuron falls back into a learning region it “forgets” its training (i.e., the learning circuits decrease the weights). On the other hand, by properly tuning the hysteresis window (Fig. <ref>), the network remains in a “stop-learning” mode, and the neuron retains a high output firing rate response to the trained pattern, even in absence of a target signal. In the larger hysteresis window case, the total estimated power consumption is 1.07 uW, and a maximum (mean) energy of 740 pJ (680 pJ) is required to update the weights. §.§ Chip measurement results §.§.§ Tristability The results from the measurements of the plastic synapse circuits are shown in Fig. <ref>. Initially, the neuron is presented with a high target activity, triggering large positive weight updates and causing a rapid increase in the synapse weight internal variable. Upon removal of the target, the weight is decreased. By increasing the power to the tristate stability amplifiers (Fig. <ref>), i.e., by increasing _ of Fig. <ref>, the circuit opposes the weight changes more strongly and drives _ to one of the three stable states more quickly. Stability at _dd and ground is shown in Fig. <ref> and at _dd/2 in Fig. <ref>. Once the stimulation ends, the tristability circuit crystalizes the weight to one of the three stable states depending on the value of _. §.§.§ Hysteresis for “stop-learning” Figure <ref> shows the results of the characterization of the hysteretic calcium-based stop-learning mechanism. Similarly to the previous experiment, the neuron is initially stimulated with a high target activity, bringing it to the learning region. The plastic synapse weight rapidly increases, pushing the neuron into the “stop-learning” region. Once the target activity is removed, the neuron returns to the learning region, and, for small hysteresis window settings (top blue plot in Fig. <ref>), the plastic synapse decreases its weight as it is stimulated. For higher values of _ the hysteresis window increases (orange plot in Fig. <ref>) and when the target is removed, the neuron's return to the learning mode is delayed. As this delay increases, even though the plastic synapse keeps on being stimulated, the neuron remains in the “stop-learning” region and the weight remains unchanged (purple plot in Fig. <ref>). § CONCLUSIONS We presented a set of analog circuits that enable learning in mixed-signal neuromorphic SNNs, with tristate stability and weight discretization circuits. 
By comparing the neuron's calcium concentration to lower and upper bounds, and by using hysteresis, we demonstrate effective always-on learning features that automatically switch from learning mode to inference mode without having to manually disable or enable learning. Comparisons to previous efforts are provided in Table <ref>. § ACKNOWLEDGMENT The authors thank Shyam Narayanan, Charlotte Frenkel, and Junren Chen for fruitful discussions and contributions.
http://arxiv.org/abs/2307.04498v1
20230710113918
RCS-based Quasi-Deterministic Ray Tracing for Statistical Channel Modeling
[ "Javad Ebrahimizadeh", "Evgenii Vinogradov", "Guy A. E. Vandenbosch" ]
cs.NI
[ "cs.NI", "cs.SY", "eess.SY" ]
RCS-based Quasi-Deterministic Ray Tracing for Statistical Channel Modeling Javad Ebrahimizadeh, Evgenii Vinogradov, Guy A.E. Vandenbosch J. Ebrahimizadeh and G. Vandenbosch are with WaveCoRE of the Department of Electrical Engineering (ESAT), KU Leuven, Leuven, Belgium. E-mail: {Javad.Ebrahimizade,Guy.Vandenbosch}@kuleuven.be E. Vinogradov is with ESAT, KU Leuven, Leuven, Belgium, also with Autonomous Robotics Research Center, Technology Innovation Institute (TII), Abu Dhabi, UAE. E-mail: [email protected]. =================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This paper presents a quasi-deterministic ray tracing (QD-RT) method for analyzing the propagation of electromagnetic waves in street canyons. The method uses a statistical bistatic distribution to model the Radar Cross Section (RCS) of various irregular objects such as cars and pedestrians, instead of relying on exact values as in a deterministic propagation model. The performance of the QD-RT method is evaluated by comparing its generated path loss distributions to those of the deterministic ray tracing (D-RT) model using the Two-sample Cramer-von Mises test. The results indicate that the QD-RT method generates the same path loss distributions as the D-RT model while offering lower complexity. This study suggests that the QD-RT method has the potential to be used for analyzing complicated scenarios such as street canyon scenarios in mmWave wireless communication systems. quasi-deterministic, ray tracing, Radar Cross Section, statistical distribution, EM propagation, Cramer-von Mises test. § INTRODUCTION Wireless communication has been rapidly evolving with the advent of new technologies and the increasing demand for high-speed data transmission. Millimeter-Wave (mmWave) wireless communication is considered a promising technology for the next generation of wireless communication due to its ability to provide multi-Gbps average data rates with low latency <cit.>. This high data rate is particularly necessary for dense urban areas such as the street canyon scenario, where a large number of users demand high-speed data transmission. In this scenario, radio frequencies at mmWave bands are used to transmit data, which requires an understanding of the propagation characteristics of mmWave signals in street canyons. Recently, Facebook introduced an affordable solution for deploying high-speed data access in street canyons using mmWave Terragraph radios operating at 60 GHz for rooftop-to-rooftop or light-pole-to-light-pole links <cit.>. Since there is no closed-form scattering model available for bistatic Radar Cross Section (RCS) of irregular objects such as pedestrians and cars, numerical methods such as the Method of Moments (MoM), Geometrical Optics (GO), Physical Optics (PO), or their combinations, are typically used to calculate the bistatic RCS of these objects. However, this increases the computational complexity of the analysis, which can be especially challenging in the case of the street canyon scenario, where a large number of irregular objects need to be considered. 
While the use of the bistatic RCS model of a sphere in the METIS channel model is simple, it may not accurately represent the scattering from irregular objects in all directions. This is because a large sphere, relative to the wavelength, exhibits a constant RCS. To address this limitation, Lahuerta-Lavieja et al. developed a fast mmWave scattering model based on the 3D Fresnel model for rectangular surfaces. However, while these models are useful for certain types of objects, they may not accurately model more complex or irregular objects <cit.>. Therefore, further research is needed to develop more accurate bistatic RCS models that can be incorporated into channel models for a more comprehensive analysis of wireless communication systems in real-world scenarios. Myint et al. demonstrated the feasibility of modeling the bistatic RCS of intricate objects using a closed-form statistical distribution function. They found that the bistatic RCS of cars conforms to a logistic distribution and applied this model to various vehicle types, including passenger cars, vans, and trucks, at sub-6 GHz frequency. However, they did not validate this Probability Density Function (PDF) model in a practical channel environment <cit.>. The present paper introduces a low-complexity quasi-deterministic ray tracing method that takes advantage of the statistical distribution of bistatic RCS of irregular objects for calculating scattering instead of its exact values, as done in deterministic ray tracing. The method uses the Physical Optics (PO) method to calculate the bistatic RCS of irregular objects at mmWave and assigns a suitable Probability Density Function (PDF) to them. This approach significantly reduces the complexity of the ray tracing method. The QD-RT method is verified numerically by calculating the path loss due to irregular objects in a realistic street canyon scenario. The main contributions of the paper are: * Development of a quasi-deterministic ray tracing technique based on dedicated PDFs of bistatic RCSs of objects. * Deriving the probability density function of the area coverage for a specific street canyon scenario in spherical coordinates. The rest of the paper is organized as follows. Section <ref> describes the quasi-deterministic ray tracing method. Section <ref> validates the quasi-deterministic propagation technique. Finally, the paper is concluded in Section <ref>. § QUASI-DETERMINISTIC RAY TRACING METHOD In this section, we provide a comprehensive overview of the street canyon topology and the theory of deterministic electromagnetic (EM) propagation in the scenario. Additionally, we outline the quasi-deterministic and statistical channel models used in the study and their corresponding parameterization. §.§ street canyon scenario topology The topology of the street canyon scenario is shown in Figure <ref> with two tall buildings on either side of the street. The street has a length of W_2 and a width of L_1, and there is a sidewalk on both sides of the street with a width of W_1. In this scenario, there are scattering objects such as lampposts, parked cars, and pedestrians placed on the street. The walls of the buildings have a thickness of D_w and are made of bricks with a relative permittivity of ϵ_r,w at operational frequency f_0. The transmitter and receiver antennas are omnidirectional antennas with vertical polarization, and they are located at positions (X_tx, Y_tx, Z_tx) and (X_rx, Y_rx, Z_rx), respectively. 
The lampposts have a radius of R_l and a length of L_l, and they are equidistantly positioned on both sides of the street with a separation distance of d_l. The scenario dimensions and parameter values are provided in Table <ref>. §.§ deterministic propagation The propagation of EM waves in the street canyon scenario includes the Line-of-Sight (LOS), reflection, and scattering paths; shadowing and diffraction are not considered. The LOS, reflection (from walls and ground), and scattering components can be modeled as: §.§.§ LOS propagation H_0(ω)=a_0e^jωτ_0, where |a_0|^2=(λ/(4π r_0))^2 is the LOS propagation loss, the corresponding path loss in dB is PL=-20 log_10(|a_0|), and τ_0=r_0/c_0 is the propagation time. §.§.§ reflections from ground and walls H^r(ω)=a^re^jωτ_r, where |a^r|^2=(R^TE/TMλ/(4π (r_1+r_2)))^2, the corresponding path loss in dB due to reflection is PL=-20 log_10(|a^r|), and τ^r=(r_1+r_2)/c_0 is the propagation time; r_1 is the distance from the TX to the specular point and r_2 is the distance from the specular point to the RX. The reflection coefficient R^TE/TM for both TE and TM polarizations of the dielectric slab (wall) and half-space (ground) media is given in <cit.>. §.§.§ Scattering from objects H^s(ω)=a^se^jωτ_s, where |a^s|^2= (1/(4π r_1^2)) ×σ_rcs× (1/(4π r_2^2)) × (λ^2/(4π)), the corresponding path loss in dB due to scattering is PL=-20 log_10(|a^s|), and the propagation time is τ^s=(r_1+r_2)/c_0; r_1 and r_2 are the distances between the scatterer and the RX and TX antennas, respectively, and σ_rcs is the bistatic RCS of the scatterer. In this paper, the bistatic RCS values of complex objects (such as cars or pedestrians) are computed by the Physical Optics Gordon method, and those of regularly shaped objects (e.g., lampposts) are computed with the closed-form model of the RCS of a conducting cylinder <cit.>. §.§ quasi-deterministic propagation In the quasi-deterministic ray tracing (QD-RT) method, the PDF of a bistatic RCS in (<ref>) is used instead of the exact value of this bistatic RCS, which drastically decreases the computational complexity. The QD-RT method is a low-complexity technique for statistical analysis and modeling of the channel by means of Monte Carlo simulations. A Monte Carlo simulation has variables that are randomly varied in each iteration. For example, for the statistical analysis of the path loss due to an irregular object using (<ref>), the distances between the object and the TX and RX antennas, denoted by r_1 and r_2, are treated as the independent random variables X_1 and X_2. Therefore, based on (<ref>), the path loss is a random variable denoted as PL(X_1,X_2) ∼ A_0- 40× log_10(X_1+X_2)-10× log_10(σ_rcs), where A_0=-10log_10((4π)^3 ×λ^2) is a constant value. According to (<ref>), using the PDF of the bistatic RCS of objects can generate the same distribution for the path loss as using the exact values of the bistatic RCS. To model the bistatic Radar Cross Section (RCS) of an irregular object using a Probability Density Function (PDF), a dataset of bistatic RCS values for all incident and scattered angles must be generated. It is important to note that the weighting of the bistatic RCS at different angles in generating this dataset is not uniform and depends on the specific scenario being tested.
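The practical difference between D-RT and QD-RT can be illustrated on the level of a single scattered path: the scattering path loss defined above is evaluated once with a fixed RCS value and once with RCS samples drawn from a logistic distribution. The geometry and the distribution parameters in the sketch below are illustrative placeholders and do not correspond to the fitted values reported later.

```python
import numpy as np

rng = np.random.default_rng(0)

def scatter_path_loss_db(r1, r2, rcs_m2, wavelength):
    """Path loss of a scattered path: PL = -20 log10(|a^s|) with
    |a^s|^2 = 1/(4 pi r1^2) * sigma_rcs * 1/(4 pi r2^2) * lambda^2/(4 pi)."""
    a_s_sq = (1.0 / (4*np.pi*r1**2)) * rcs_m2 * (1.0 / (4*np.pi*r2**2)) * wavelength**2 / (4*np.pi)
    return -10.0 * np.log10(a_s_sq)

wavelength = 3e8 / 60e9                    # 60 GHz carrier
r1 = rng.uniform(5.0, 50.0, 10_000)        # TX-scatterer distances (illustrative)
r2 = rng.uniform(5.0, 50.0, 10_000)        # scatterer-RX distances (illustrative)

# Deterministic RT: one exact RCS value per object (here a fixed 10 dBsm, illustrative).
pl_drt = scatter_path_loss_db(r1, r2, 10**(10.0/10.0), wavelength)

# Quasi-deterministic RT: RCS drawn from a logistic distribution (illustrative parameters).
rcs_dbsm = rng.logistic(loc=10.0, scale=3.0, size=r1.size)
pl_qdrt = scatter_path_loss_db(r1, r2, 10**(rcs_dbsm/10.0), wavelength)

print(pl_drt.mean(), pl_qdrt.mean())       # compare the resulting path loss statistics
```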
In the case of a street canyon scenario, the angular dependency of the bistatic RCS in creating the dataset follows a specific equation: f_Θ, Φ(θ,ϕ) = (Δ z)^2 sin(θ)/(2 L_1 W_2 cos^3(θ)) if a/(Δ z ×sin(ϕ)) < θ < b/(Δ z ×sin(ϕ)) and ϕ_0 < ϕ < π - ϕ_0, and f_Θ, Φ(θ,ϕ) = 0 otherwise, where θ and ϕ are the elevation and azimuth angles in spherical coordinates. Δ z is the differential height between the object and the TX (RX). Here, the elevation angle is limited by the lines Y_w = a and Y_w = b, and the azimuth angle is bounded by ϕ_0 = 2a/L_1, see Fig. <ref>. § SIMULATION RESULTS The purpose of this study is to validate the quasi-deterministic ray tracing (QD-RT) method by comparing it with the deterministic ray tracing (D-RT) method for a street canyon scenario. To accomplish this, the path loss and excess delay time distributions due to a pedestrian (parked cars) located randomly on the sidewalk (along the street) in the street canyon scenario shown in Fig. <ref>, with dimensions listed in Table <ref>, are numerically calculated using both methods. The PDFs of the bistatic RCS for the pedestrians and parked cars are first obtained using the Physical Optics method; both follow logistic distributions, as listed in Table <ref>. The mean values for a car and a pedestrian are 11 and 6.17 dBsm, respectively. However, the maximum values (corresponding to the specular points) for a car and a pedestrian are around 60 dBsm and 40 dBsm, which yields a considerable difference of approximately 20 dBsm. A total of 1000 Monte Carlo simulations with n ∈{1, ..., 10} pedestrians randomly positioned on the sidewalks is then performed, and the resulting path loss distributions are fitted to Weibull distributions with scale and shape parameters. Excess time delay distributions for both pedestrians and parked cars are also calculated, with lognormal distributions observed. The statistical parameters of the path loss and excess time delay distributions are presented in Fig. <ref> and Table <ref>, respectively. This study demonstrates that the QD-RT method offers the same path loss distributions as the D-RT method with lower complexity, making it a promising approach for analyzing complex scenarios such as street canyon scenarios in mmWave wireless communication systems. § CONCLUSION In conclusion, the proposed quasi-deterministic ray tracing method using a statistical bistatic distribution to model the Radar Cross Section of various irregular objects showed promising results in analyzing the propagation of electromagnetic waves in street canyon scenarios. The method provided the same path loss and excess time delay distributions as the deterministic ray tracing model while offering lower complexity. The study also found that the scenario-specific PDF of the bistatic RCS of irregular objects followed logistic distributions, and that the path loss and excess time delay followed Weibull and lognormal distributions, respectively. This study highlights the potential of the QD-RT method for analyzing complicated scenarios, such as street canyon scenarios, in mmWave wireless communication systems. § ACKNOWLEDGMENTS The present work received funding from the European Union’s Framework Programme for Research and Innovation Horizon 2020 under Grant Agreement No. 861222 (MINTS project).
http://arxiv.org/abs/2307.04593v1
20230710143512
DWA: Differential Wavelet Amplifier for Image Super-Resolution
[ "Brian B. Moser", "Stanislav Frolov", "Federico Raue", "Sebastian Palacio", "Andreas Dengel" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Moser et al. German Research Center for Artificial Intelligence (DFKI), Germany RPTU Kaiserslautern-Landau, Germany [email protected] DWA: Differential Wavelet Amplifier for Image Super-Resolution Brian B. Moser1, 2 Stanislav Frolov1,2 Federico Raue1 Sebastian Palacio1 Andreas Dengel1, 2 February 2023 =================================================================================================== This work introduces Differential Wavelet Amplifier (DWA), a drop-in module for wavelet-based image Super-Resolution (SR). DWA invigorates an approach recently receiving less attention, namely Discrete Wavelet Transformation (DWT). DWT enables an efficient image representation for SR and reduces the spatial area of its input by a factor of 4, the overall model size, and computation cost, framing it as an attractive approach for sustainable ML. Our proposed DWA model improves wavelet-based SR models by leveraging the difference between two convolutional filters to refine relevant feature extraction in the wavelet domain, emphasizing local contrasts and suppressing common noise in the input signals. We show its effectiveness by integrating it into existing SR models, e.g., DWSR and MWCNN, and demonstrate a clear improvement in classical SR tasks. Moreover, DWA enables a direct application of DWSR and MWCNN to input image space, reducing the DWT representation channel-wise since it omits traditional DWT. § INTRODUCTION Image Super-Resolution (SR) has an impressive legacy in Computer Vision (CV) yet still presents an exhilarating challenge <cit.>. SR is a task of enhancing Low-Resolution (LR) images to High Resolution (HR). It is challenging because many High Resolution (HR) images can correspond to a given Low-Resolution (LR) image, rendering the task mathematically ill-posed. In recent years, deep learning has fueled rapid development in SR, leading to tremendous progress <cit.>. While many techniques have improved the overall quality of image reconstructions, there remains a pressing need for methods capable of producing high-frequency details, particularly when dealing with high magnification ratios <cit.>. Addressing this issue is crucial for the continued advancement of SR. Influenced by achievements on other CV tasks, recent research focused on trending approaches like Transformer-based networks <cit.>, Denoising Diffusion Probabilistic Models <cit.> or Generative Adversarial Networks <cit.>. Despite astonishing reconstruction capabilities, they often lack an explicit focus on generating high-frequency details, i.e., local variations. This work aims to advance the field of SR by exploring wavelet-based networks. Unfortunately, this technique has received less attention despite its significant potential <cit.>. We seek to provide a fresh perspective and revive research by re-evaluating these approaches. Discrete Wavelet Transformation (DWT) enables an efficient image representation without losing information compared to its naïve spatial representation, i.e., traditional RGB format. It does so by separating high-frequency details in distinct channels and reducing the spatial area of input image representation by a factor of 4. Therefore, a smaller receptive field is required to capture the input during feature extraction. Using DWT like in DWSR <cit.> and MWCNN <cit.> reduces the overall model size and computational costs while performing similarly to state-of-the-art image SR architectures. 
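The channel-for-space trade of the DWT can be illustrated in a few lines using the PyWavelets package: a single-level 2D Haar transform maps one image channel to four sub-bands of a quarter of the spatial area, and the inverse transform restores the input exactly.

```python
import numpy as np
import pywt

img = np.random.rand(64, 64).astype(np.float64)   # stand-in for one image channel

# Single-level 2D Haar DWT: approximation (average) and three detail sub-bands.
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")
print(LL.shape, LH.shape, HL.shape, HH.shape)     # each is (32, 32): 4x smaller spatial area

# The transform is lossless: the inverse DWT reconstructs the input up to float precision.
recon = pywt.idwt2((LL, (LH, HL, HH)), "haar")
print(np.allclose(recon, img))
```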
This work introduces a new Differential Wavelet Amplifier (DWA) module inspired by differential amplifiers from electrical engineering <cit.>. Differential amplifiers increase the difference between two input signals and suppress the common voltage shared by the two inputs, called Common Mode Rejection (CMR) <cit.>. In other words, it mitigates the impact of noise (e.g., electromagnetic interference, vibrations, or thermal noise) affecting both source inputs while retaining valuable information and improving the integrity of the measured input signal. Our proposed DWA layer adapts this idea to deep learning and can be used as a drop-in module to existing SR models. This work shows its effectiveness as exemplary for wavelet-based SR approaches. DWA leverages the difference between two convolutional filters with a stride difference to enhance relevant feature extraction in the wavelet domain, emphasizing local contrasts and suppressing common noise in the input signals. We demonstrate the effectiveness of DWA through extensive experiments and evaluations, showing improved performance compared to existing wavelet-based SR models without DWA: DWSR with DWA shows overall better performance w.r.t. PSNR and SSIM, and MWCNN with DWA achieves better SSIM scores with comparable PSNR values on the testing datasets Set5 <cit.>, Set14 <cit.>, and BSDS100 <cit.>. Taken together, our work makes the following key contributions: * Introduction of Differential Wavelet Amplifier (DWA): a novel module that leverages the difference between two convolutional filters horizontally and vertically in a wavelet-based image representation, which is applicable as drop-in addition in existing network architectures. * Comprehensive evaluation demonstrating the improved performance by using DWA on popular SR datasets such as Set5 <cit.>, Set14 <cit.>, and BSDS100 <cit.> by adding DWA to existing wavelet-based SR models, namely, DWSR <cit.> and MWCNN <cit.>. * Experimental analysis showing that DWA enables a direct application of DWSR and MWCNN to the input space by avoiding the DWT on the input image. This application reduces the input channel-wise to 3 instead of 12 channels for RGB images while keeping the spatial reduction benefit of DWT. * Visual examination of reconstructions showcasing that the DWSR with the DWA module captures better distinct edges and finer details, which are also closer to the ground truth residuals. § BACKGROUND This chapter provides comprehensive background information on 2D Discrete Wavelet Transform (2D-DWT), how SR models (DWSR <cit.> and MWCNN <cit.>) use it, and related work to Differential Wavelet Amplifiers (DWA). Additionally, we introduce differential amplifiers from electrical engineering, which inspired our proposed method DWA. §.§ Discrete Wavelet Transform in SR The 2D Discrete Wavelet Transform (2D-DWT) decomposes an image into four unique sub-bands with distinct frequency components: a low-frequency approximation sub-band and three high-frequency detail sub-bands representing horizontal, vertical, and diagonal details. Let x [ n ] ∈ℝ^N be a signal. The 1D Discrete Wavelet Transformation (1D-DWT) with Haar wavelet passes the input signal first through a half-band high-filter h [ n ] and a low-pass filter l [ n ]. Next, half of the sample is eliminated according to the Nyquist rule <cit.>. The wavelet coefficients are calculated by repeating the decomposition to each output coefficient iteratively <cit.>. 
In the case of images, it applies h [ n ] and l [ n ] in different combinations, resulting in four function applications. The DWSR <cit.> SR model exploits the wavelet domain and gets the DWT representation of the interpolated LR image as input. DWSR is composed of 10 convolution layers that are applied sequentially. It adds the interpolated LR input as residual for the final reconstruction step, which results in learning only the sparse residual information between the LR and HR domains. MWCNN <cit.> exploits multi-level DWT (multiple applications of DWT) and utilizes a U-Net architecture <cit.>. DWT replaces all downsizing steps, and the inverse operation of DWT replaces all upsampling steps. Ultimately, it uses the interpolated LR image as a residual connection for the final prediction. The standard MWCNN setup consists of 24 convolution layers. One caveat of DWSR and MWCNN in learning the residual is that they must translate its rich information input to sparse representation, e.g., the average band. To ease the burden, we present a Differential Wavelet Amplifier, which directly transforms the input into sparse representations inspired by differential amplifiers introduced next. §.§ Differential Amplifier An electronic amplifier is a standard electrical engineering device to increase a signal's power <cit.>. One type of electronic amplifier is the differential amplifier that increases the difference between two input signals and suppresses the common voltage shared by the two inputs <cit.>. Given two inputs V^-_in, V^+_in∈ℝ^N and the differential gain of the amplifier A_d ∈ℝ, the output V_out is calculated as V_out = A_d ( V^+_in - V^-_in) The purpose of differential amplifiers is to suppress common signals or noise sources that are present in multiple input channels while retaining valuable information. In the literature, this is called Common Mode Rejection (CMR) and is a critical property in many electrical engineering applications, particularly in systems that measure small signals in the presence of noise or interference, e.g., electromagnetic interference or thermal noise <cit.>. Hence, using CMR improves the signal-to-noise ratio, overall system performance, and signal integrity since the system can focus on the relevant differential signals. §.§ Differential Convolutions Closest to our work is Sarıgül et al. <cit.>, which applies differential convolutions, i.e., the difference of two convolution layers, to emphasize contrasts for image classification, which is inherently different to image generation tasks such as image SR. Despite this, they do not consider a stride difference vital for capturing variations. Knutsson et al. <cit.> theoretically examine a normalized version of differential convolutions also with no stride difference. Due to the time of publication, they did not try it in the case of deep learning-based image SR. Newer applications like Canh et al. <cit.> consider learnable parameters to turn the Difference of Gaussians (DoG) <cit.> into a learnable framework, but has the same caveat: As Knutsson concluded, their approaches can be interpreted as a standard convolution weighted with the local energy minus the “mean” operator acting on the “mean” data, i.e., a more elaborate convolution operation. A similarity could also be seen in the approach of residual connections of ResNets <cit.> when the kernel parameters have a negative sign. 
However, residual connections are different since they force a convolution layer to learn to extract the sparse details that are not apparent in the input. In contrast, our proposed method with Differential Wavelet Amplifier (DWA) explicitly produces sparse details by design due to the subtraction operator. Therefore, DWA does not have to learn what input information should be removed for the residual information. It can focus on relevant features that persist when the stride convolution does not detect the same feature, thereby emphasizing local contrast. § DIFFERENTIAL WAVELET AMPLIFIER (DWA) This section presents our proposed Differential Wavelet Amplifier (DWA) module. Inspired by differential amplifiers in electrical engineering, DWA is designed to operate in the wavelet domain and exploits the difference between two input signals to improve the performance of image SR methods based on wavelet predictions. DWA is applied separately along the horizontal and vertical axes of the input image. In each direction, we perform two convolutions with a stride distance in one direction for both axes (from left to right, from top to bottom, as in MDLSTMs <cit.>), allowing a fine-grained feature extraction and emphasizing local contrasts while suppressing the common mode in the input, similar to CMR in electrical engineering. <ref> visualizes all processes involved in DWA. Let 𝐱∈ℝ^w × h × c_in be an input image or feature map with c_in channels. We define ψ(𝐱, (i, j) ) : ℝ^w × h × c_in×ℕ^2 →ℝ^k · k × c_in as a function that extracts k · k points around a spatial position (i, j). We can then express the resulting feature maps for the horizontal 𝐇( 𝐱) and vertical 𝐕( 𝐱) axes as 𝐇( 𝐱)_i,j = f ( ψ(𝐱, (i, j) ) ; θ_1 ) - f ( ψ(𝐱, (i+s, j) ) ; θ_2 ), 𝐕( 𝐱)_i,j = f ( ψ(𝐱, (i, j) ) ; θ_3 ) - f ( ψ(𝐱, (i, j+s) ) ; θ_4), where f : ℝ^k · k × c_in→ℝ^c_f is a convolution operation with parameters θ_n for 1 ≤ n ≤ 4, k × k the kernel size and s ∈ℕ a pre-defined stride difference. As a result, the local variance is captured in one direction for both axes, similar to MDLSTMs <cit.>: from left to right with parameters θ_1 and θ_2 and from top to bottom with parameters θ_3 and θ_4. We obtain two distinct feature maps that capture complementary input image information and provide richer feature representations for the wavelet-based SR task. The input is directly translated to sparse representations, which reduces the distance to residual target objectives in networks that use residual connections for final prediction. We concatenate the resulting feature maps alongside the input to ensure no information is lost during the DWA processing. This combination creates a comprehensive set of feature maps that retains the original input information while incorporating the directional features obtained from both axes. More formally: g ( 𝐱) = 𝐱⊙σ( H ( 𝐱) ⊙ V ( 𝐱) ), where ⊙ is a channel-wise concatenation operator and σ is a non-linear function like sigmoid, tanh or ReLU <cit.>. The concatenated feature map is fed into an additional convolution layer f_final: ℝ^k · k × (c_in + 2 · c_f)→ℝ^c_final with parameters θ_final, which maps the channel size after concatenation to a desired target channel size c_final such that our module can easily be incorporated into existing models: DWA( 𝐱)_i,j = f_final( ψ(g (𝐱), (i, j) ) ; θ_final). An SR model utilizing this DWA module exploits the comprehensive feature map to learn the complex relationships between LR and HR images, ultimately reconstructing the HR image with reduced noise.
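A compact PyTorch realization of the operations above is sketched below. The shift by the stride difference s is implemented by zero-padding and slicing so that all feature maps keep the same spatial size; this boundary handling, as well as the chosen channel widths and the sigmoid non-linearity, are implementation choices that are not fixed by the equations themselves.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DWA(nn.Module):
    """Differential Wavelet Amplifier: differences of convolutions with a stride
    (shift) difference s along both spatial axes, concatenated with the input and
    fused by a final convolution."""
    def __init__(self, c_in, c_f=32, c_out=64, k=3, s=1):
        super().__init__()
        self.s = s
        p = k // 2
        self.f1 = nn.Conv2d(c_in, c_f, k, padding=p)   # theta_1
        self.f2 = nn.Conv2d(c_in, c_f, k, padding=p)   # theta_2
        self.f3 = nn.Conv2d(c_in, c_f, k, padding=p)   # theta_3
        self.f4 = nn.Conv2d(c_in, c_f, k, padding=p)   # theta_4
        self.f_final = nn.Conv2d(c_in + 2 * c_f, c_out, k, padding=p)

    @staticmethod
    def _shift(x, dx, dy):
        # Access the feature map at (i + dx, j + dy), zero-padding past the border.
        if dx == 0 and dy == 0:
            return x
        x = F.pad(x, (0, dy, 0, dx))   # pad order: (W left, W right, H top, H bottom)
        return x[..., dx:, dy:]

    def forward(self, x):
        # Difference between a convolution centred at (i, j) and one centred at (i+s, j).
        h = self.f1(x) - self._shift(self.f2(x), self.s, 0)
        # Difference between a convolution centred at (i, j) and one centred at (i, j+s).
        v = self.f3(x) - self._shift(self.f4(x), 0, self.s)
        g = torch.cat([x, torch.sigmoid(torch.cat([h, v], dim=1))], dim=1)
        return self.f_final(g)

x = torch.randn(1, 3, 48, 48)      # e.g. an interpolated LR image in the direct setting
y = DWA(c_in=3)(x)
print(y.shape)                     # torch.Size([1, 64, 48, 48])
```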
By employing the DWA, we aim to harness the benefits of wavelet domain processing and the difference between two convolutional filters. We demonstrate the effectiveness of our approach through extensive experiments and evaluations in the following sections. §.§ Direct Application of DWA (DWA Direct) One way to circumvent additional computation steps is to apply DWA directly on the image space, omitting DWT and learning the transition between image and frequency space implicitly via DWA. Thus, the interpolation of the input, which effectively adds no additional information since it generates only approximated values, can be reduced by half for networks like DWSR or MWCNN. Consequently, the network is better adapted to the given values of the LR input. In the experiments, we evaluate this alternative approach called DWA Direct and show that it further enhances the performances of DWSR and MWCNN. § EXPERIMENTS We evaluate our proposed DWA module by integrating it into the wavelet-based SR models DWSR and MWCNN. We begin this section by describing the experiments. Next, we discuss the results quantitatively and qualitatively. We show the effectiveness of DWA and that a direct application of wavelet-based SR models with DWA to image space is feasible without forfeiting reconstruction quality. §.§ Experimental Setup We applied widely-used SR datasets to evaluate our method. In addition, we utilized standard augmentation techniques such as rotation, horizontal and vertical flipping. For testing, we employed the datasets Set5 <cit.>, Set14 <cit.>, BSDS100 <cit.>. For training, we used different settings for DWSR and MWCNN to match the original works for a fair comparison, as dissected in the following. In all experiments, we train using the Adam optimizer <cit.> with a learning rate of 10^-4 with L2 regularization of 10^-8 on a single A100 GPU. Moreover, we use a learning rate decay schedule, which reduces the learning rate by 20 % every 20 epochs. Ablation Study: We use DIV2K <cit.> and follow the standard procedure by extracting sub-images of 192×192 for training. We iterate for 40 epochs over the training dataset. Since we compare with DWSR, we use L1-loss as the learning objective, as reported by the authors of DWSR. DWSR-Scenario: We use DIV2K <cit.> like in the ablation study, but we train for 100 epochs as reported in DWSR. MWCNN-Scenario: We collect 800 images from DIV2K <cit.>, 200 images from BSD <cit.> and 4,744 images from WED <cit.> and train for 100 epochs. Contrary to DWSR, we adapt the L2-loss like the authors of MWCNN. For sub-image extraction, we use a size of 240×240 to match the training settings of MWCNN. § RESULTS This section presents the quantitative and qualitative analysis of this work. It shows that incorporating the DWA module into DWSR improves the performance in every dataset and for all scaling factors. Moreover, we consistently improve the SSIM scores by implementing DWA into MWCNN and achieve similar PSNR results. This section starts with an ablation study to investigate different striding settings and the effect of combining DWA with DWSR for the direct application and the regular DWT case (see <ref>). Next, we examine the performance scores of our DWA module on classical SR datasets with DWSR and MWCNN. Finally, we visually compare the quality of the reconstructions. §.§.§ Ablation Study <ref> shows the impact of different striding settings for DWSR with DWA with 2x and 4x scaling. 
We observe an improvement for striding settings greater than 0, significantly for PSNR and slightly for SSIM. The differences between striding settings greater than 0 are minimal, with a slight decrease for larger striding sizes. Nonetheless, they outperform DWA with no stride difference consistently. Thus, having a stride difference to capture local variations more effectively benefits the overall performance of DWSR. We further investigate the impact of various model configurations, DWSR with or without the DWA module, in a direct application or without (see <ref>). <ref> presents the results, where two graphs display the PSNR and SSIM values <cit.>, respectively, for each method. We apply the ablation study with different model depths, ranging from 6 to 18, instead of using a standard depth of 10 for DWSR. As a result, DWSR with DWA or DWA Direct consistently outperforms the DWSR baseline across all model depths. This demonstrates the effectiveness of incorporating the DWA module as the first layer in the DWSR framework. Moreover, DWA Direct outperforms DWA applied to the DWT on the input with greater model depths. Furthermore, we observe a considerable performance drop in DWSR Direct without using the DWA module compared to all other evaluated methods. This indicates that the DWA module is crucial in enabling the Direct approach, as its absence significantly degrades performance. §.§.§ Performance <ref> summarizes PSNR and SSIM scores when applying the DWA module to DWSR and MWCNN for classical SR datasets on different scaling factors for a longer training span. We observe that incorporating the DWA module into DWSR improves the performance in every dataset and for all scaling factors. For MWCNN with DWA, a similar observation can be made, especially for the SSIM scores, which show overall the best performances. However, it has slightly decreased PSNR values for some cases, e.g., for scaling factor 3. Both applications, DWSR with DWA and MWCNN with DWA, are applied directly on the input image space, omitting a DWT of the input. §.§.§ Visual Comparison <ref> displays the ground truth HR image alongside the DWSR and DWA reconstructions. DWSR and DWA perform reasonably well in reconstructing the images. However, the DWA reconstructions exhibit more accurate and sharp details, particularly in the zoomed-in regions. Since the added bicubic interpolation of the LR image in the reconstruction process provides a robust base prediction, we also present the residual images, which are the differences between the bicubic interpolations and the ground truth images, to highlight the performance difference between both approaches. These residual images are the learning targets of the models to improve the reconstruction quality beyond interpolation. By comparing the residual images, we can see more clearly that the DWA model captures better distinct edges and finer details, which are also closer to the ground truth residuals, as opposed to the DWSR model. It has more substantial edges and finer points in the residual images, which are also closer in color (see red colored lines of DWSR reconstruction in <ref> as a comparison). This observation aligns with our quantitative results, where DWA outperforms DWSR regarding various performance metrics. To provide deeper insights into our proposed models, <ref> presents feature maps generated by the DWSR and DWA Direct models after the first layer. 
To ensure diversity, we selected the top five channels from each method based on the highest sum of distances between pairwise differences of all channels. Our analysis reveals that although DWSR operates on the frequency space, it still remains similar to the LR input and fails to capture the desired target residual. In contrast, DWA Direct extracts local contrasts and variations more effectively from the image space and performs better in mapping the target residual. § CONCLUSION AND FUTURE WORK In this work, we presented a novel Differential Wavelet Amplifier (DWA) module, which can be used as a drop-in module to existing wavelet-based SR models. We showed experimentally on Set5, Set14, and BSDS100 for scaling factors 2, 3, and 4 that it improves the reconstruction quality of the SR models DWSR and MWCNN while enabling an application of them to the input image space directly without harm to performance. This module captures more distinct edges and finer details, which are closer to the ground truth residuals, which wavelet-based SR models usually learn. This work is an opportunity to seek further advancements for SR based on frequency-based representations. For future work, an exciting research avenue would be to explore ways to incorporate DWA on different DWT levels in MWCNN instead of only applying it initially. § ACKNOWLEDGMENTS This work was supported by the BMBF projects SustainML (Grant 101070408) and by Carl Zeiss Foundation through the Sustainable Embedded AI project (P2021-02-009). splncs04
http://arxiv.org/abs/2307.10829v2
20230710121818
Exact Diffusion Inversion via Bi-directional Integration Approximation
[ "Guoqiang Zhang", "J. P. Lewis", "W. Bastiaan Kleijn" ]
cs.CV
[ "cs.CV" ]
[ Michael Liut August 12, 2023 =================== Recently, different methods have been proposed to address the inconsistency issue of DDIM inversion to enable image editing, such as EDICT <cit.> and Null-text inversion <cit.>. However, the above methods introduce considerable computational overhead. In this paper, we propose a new technique, named bi-directional integration approximation (BDIA), to perform exact diffusion inversion with neglible computational overhead. Suppose we would like to estimate the next diffusion state z_i-1 at timestep t_i with the historical information (i,z_i) and (i+1,z_i+1). We first obtain the estimated Gaussian noise ϵ̂(z_i,i), and then apply the DDIM update procedure twice for approximating the ODE integration over the next time-slot [t_i, t_i-1] in the forward manner and the previous time-slot [t_i, t_t+1] in the backward manner. The DDIM step for the previous time-slot is used to refine the integration approximation made earlier when computing z_i. One nice property with BDIA-DDIM is that the update expression for z_i-1 is a linear combination of (z_i+1, z_i, ϵ̂(z_i,i)). This allows for exact backward computation of z_i+1 given (z_i, z_i-1), thus leading to exact diffusion inversion. Interestingly, the update expression for z_i-1 is in fact time-symmetric in that switching the timestep t_i-1 and t_i+1 produces the inverse update expression for z_i+1 in terms of (z_i,z_i-1). Experiments on both image reconstruction and image editing were conducted, confirming our statement. BDIA can also be applied to improve the performance of other ODE solvers in addition to DDIM. In our work, it is found that applying BDIA to the EDM sampling procedure produces slightly better FID score over CIFAR10. § INTRODUCTION As one type of generative models, diffusion probabilistic models (DPMs) have made significant progress in recent years. The pioneering work <cit.> applied non-equilibrium statistical physics to estimating probabilistic data distributions. In doing so, a Markov forward diffusion process is constructed by systematically inserting additive noise into a data sample until the data distribution is almost destroyed. The data distribution is then gradually restored by a reverse diffusion process starting from a simple parametric distribution. The main advantage of DPM over classic tractable models (e.g., HMMs, GMMs, see <cit.>) is that DPM can accurately model both the high and low likelihood regions of the data distribution by estimating a sequence of progressively less noise-perturbed data distributions. In comparison to generative adversarial networks (GANs) <cit.>, DPMs exhibit more stable training dynamics by avoiding adversarial learning, as well as showing better sample diversity. Following the work of <cit.>, various learning and/or sampling strategies have been proposed to improve the performance of DPMs, which include, for example, denoising diffusion probabilistic models (DDPMs) <cit.>, denoising diffusion implicit models (DDIMs) <cit.>, improved DDIMs <cit.>, latent diffusion models (LDMs)<cit.>, score matching with Langevin dynamics (SMLD) <cit.>, analytic-DPMs <cit.>, optimized denoising schedules <cit.>, guided diffusion strategies <cit.>, and classifier-free guided diffusion <cit.>. It is worth noting that DDIM can be interpreted as a first-order ODE solver. As an extension of DDIM, various high-order ODE solvers have been proposed, such as EDM <cit.>, DEIS <cit.>, PNDM <cit.>, DPM-Solvers <cit.>, and IIA-EDM and IIA-DDIM <cit.>. 
In recent years, image-editing via diffusion models has attracted increasing attention in both academia and industry. One important operation for editing a real image is to first perform forward process on the image to obtain the final noise representation and then perform a backward process with embedded editing to generate the desired image <cit.>. DDIM inversion has been widely used to perform the above forward and backward processes <cit.>. A major issue with DDIM inversion is that the intermediate diffusion states in the forward and backward processes may be inconsistent due to the inherent approximations (see Subsection <ref>). This issue becomes significant when utilizing classifier-free guided technique in text-to-image editing <cit.>. The newly generated images are often perceptually far away from the original ones, which is undesirable for image-editing. Recently, two methods have been proposed to address the inconsistency issue of DDIM inversion. Specifically, the work of <cit.> proposed a technique named null-text inversion to push the diffusion states of the backward process to be optimally close to those of the forward process via iterative optimization. The null-text inputs to the score neural network are treated as free variables in the optimization procedure. In <cit.>, the authors proposed the EDICT technique to enforce exact DDIM inversion. Their basic idea is to introduce an auxiliary diffusion state and then perform alternating updates on the primal and auxiliary diffusion states, which is inspired by the flow generative framework <cit.>. One drawback of EDICT is that the number of neural functional evaluations (NFEs) has to be doubled in comparison to DDIM inversion (See Subsection <ref>). Another related line of research work is DDPM inversion (see <cit.>). In this paper, we propose a new technique to enforce exact DDIM inversion with negligable computational overhead, reducing the number of NFEs required in EDICT by half. Suppose we are in a position to estimate the next diffusion state z_i-1 at timestep t_i by utilizing the two most recent states z_i and z_i+1. With the estimated Gaussian noise ϵ̂(z_i,i), we perform the DDIM update procedure twice for approximating the ODE integration over the next time-slot [t_i, t_i-1] in the forward manner and the previous time-slot [t_i,t_i+1] in the backward manner. The DDIM for the previous time-slot is employed to refine the integration approximation made earlier when computing z_i. As a result, the expression for z_i-1 becomes a linear combination of (z_i+1, z_i,ϵ̂(z_i,i)), and naturally facilitates exact diffusion inversion. We refer to the above technique as bi-directional integration approximation (BDIA). We emphasize that the obtained update expression for z_i-1 under BDIA-DDIM is time-symmetric in that switching the timestep t_i-1 and t_i+1 inverts the diffusion directions (see Section <ref> for a discussion on relevant literature). Experiments demonstrate that BDIA-DDIM produces satisfactory results on both image reconstruction and image editing. We have also applied BDIA to EDM, and found that the image qualities are also improved slightly. § PRELIMINARY Forward and reverse diffusion processes: Suppose the data sample x∈ℝ^d follows a data distribution p_data(x) with a bounded variance. A forward diffusion process progressively adds Gaussian noise to the data samples x to obtain z_t as t increases from 0 until T. 
The conditional distribution of z_t given x can be represented as q_t|0(z_t|x) = 𝒩(z_t|α_tx, σ_t^2I) z_t = α_tx+σ_t ϵ, where α_t and σ_t are assumed to be differentiable functions of t with bounded derivatives. We use q(z_t; α_t,σ_t) to denote the marginal distribution of z_t. The samples of the distribution q(z_T;α_T,σ_T) should be practically indistinguishable from pure Gaussian noise if σ_T ≫α_T. The reverse process of a diffusion model firstly draws a sample z_T from 𝒩(0, σ_T^2I), and then progressively denoises it to obtain a sequence of diffusion states {z_t_i∼ p(z;α_t_i,σ_t_i)}_i=0^N, where we use the notation p(·) to indicate that reverse sample distribution might not be identical to the forward distribution q(·) because of practical approximations. It is expected that the final sample z_t_0 is roughly distributed according to p_data(x), i.e., p_data(x)≈ p(z_t_0;α_t_0,σ_t_0) where t_0=0. ODE formulation: In <cit.>, Song et al. present a so-called probability flow ODE which shares the same marginal distributions as z_t in (<ref>). Specifically, with the formulation (<ref>) for a forward diffusion process, its reverse ODE form can be represented as dz = [f(t)z_t-1/2g^2(t)∇_zlog q(z_t; α_t,σ_t)]_d(z_t, t)dt, where d(z_t,t) denotes the gradient vector at time t, and the two functions f(t) and g(t) are represented in terms of (α_t, σ_t) as f(t) = dlogα_t/dt, g^2(t)=dσ_t^2/dt-2dlogα_t/dtσ_t^2. ∇_zlog q(z;α_t,σ_t) in (<ref>) is the score function <cit.> pointing towards higher density of data samples at the given noise level (α_t,σ_t). One nice property of the score function is that it does not depend on the generally intractable normalization constant of the underlying density function q(z;α_t,σ_t). As t increases, the probability flow ODE (<ref>) continuously reduces the noise level of the data samples in the reverse process. In the ideal scenario where no approximations are introduced in (<ref>), the sample distribution p(z;α_t,σ_t) approaches p_data(x) as t goes from T to 0. As a result, the sampling process of a diffusion model boils down to solving the ODE form (<ref>), where randomness is only introduced in the initial sample at time T. This has opened up the research opportunity of exploiting different ODE solvers in diffusion-based sampling processes. Denoising score matching: To be able to utilize (<ref>) for sampling, one needs to specify a particular form of the score function ∇_zlog q(z;α_t,σ_t). One common approach is to train a noise estimator ϵ̂_θ by minimizing the expected L_2 error for samples drawn from q_data (see <cit.>): 𝔼_x∼ p_data𝔼_ϵ∼𝒩(0, σ_t^2I)ϵ̂_θ(α_t x+σ_tϵ,t)-ϵ_2^2, where (α_t, σ_t) are from the forward process (<ref>). The common practice in diffusion models is to utilize a neural network of U-Net architecture <cit.> to represent the noise estimator ϵ̂_θ. With (<ref>), the score function can then be represented in terms of ϵ̂_θ(z_t; t) as (see also (229) of <cit.>) ∇_zlog q(z_t;α_t,σ_t) =-(z_t-α_t x)/σ_t^2 = -ϵ̂_θ(z_t; t)/σ_t. Alternatively, the score function can be represented in terms of an estimator for x (see <cit.>). The functional form for the noise level (α_t,σ_t) also plays an important role in the sampling quality in practice. For example, the setup (α_t,σ_t)=(1,√(t)) was studied in <cit.>, which corresponds to constant-speed heat diffusion. The recent work <cit.> found that a simple form of (α_t,σ_t)=(1,t) works well in practice. 
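As a minimal illustration of the denoising objective above, a single stochastic training step can be sketched as follows. We adopt the common convention ϵ ∼ 𝒩(0, I) with z_t = α_t x + σ_t ϵ; here eps_model, alphas, and sigmas are placeholders for the U-Net noise estimator and a discretized noise schedule rather than the exact setup used in this paper.

import torch

def noise_prediction_loss(eps_model, x, alphas, sigmas):
    # Draw a random timestep per sample and perturb the data as z_t = alpha_t * x + sigma_t * eps.
    b = x.shape[0]
    t = torch.randint(0, alphas.shape[0], (b,), device=x.device)
    a_t = alphas[t].view(b, 1, 1, 1)
    s_t = sigmas[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x)
    z_t = a_t * x + s_t * eps
    # Monte-Carlo estimate of E || eps_model(z_t, t) - eps ||^2.
    return ((eps_model(z_t, t) - eps) ** 2).mean()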
§ BI-DIRECTIONAL INTEGRATION APPROXIMATION (BDIA) FOR DDIM In this section, we first review DDIM inversion and EDICT as an extension of DDIM inversion. We then present our BDIA technique to enable exact diffusion inversion. §.§ Review of DDIM inversion We first consider the update expression of DDIM for sampling, which is in fact a first-order solver for the ODE formulation (<ref>)-(<ref>) (see <cit.>), given by z_i-1= α_i-1(z_i -σ_iϵ̂_θ(z_i, i) /α_i)+σ_i-1ϵ̂_θ(z_i, i) = a_i z_i +b_iϵ̂_θ(z_i, i) ≈ z_i+∫_t_i^t_i-1d(z_τ,τ)dτ, where a_i=α_i-1/α_i and b_i=σ_i-1-σ_iα_i-1/α_i. It is clear from (<ref>)-(<ref>) that the integration ∫_t_i^t_i-1d(z_τ,τ)dτ is approximated by the forward DDIM update. That is, only the diffusion state z_i at the starting timestep t_i is used in the integration approximation. To perform DDIM inversion, z_i can be approximated in terms of z_i-1 as z_i =α_i(z_i-1-σ_i-1ϵ̂_θ(z_i,i)/α_i-1)+σ_iϵ̂_θ(z_i,i) ≈α_i(z_i-1-σ_i-1ϵ̂_θ(z_i-1,i)/α_i-1)+σ_iϵ̂_θ(z_i-1,i), where z_i in the RHS of (<ref>) is replaced with z_i-1 to facilitate explicit computation. This naturally introduces approximation errors, leading to inconsistency of the diffusion states between the forward and backward processes. §.§ Review of EDICT for exact diffusion inversion Inspired by the flow generative framework <cit.>, the recent work <cit.> proposed EDICT to enforce exact diffusion inversion. The basic idea is to introduce an auxiliary diffusion state y_i to be coupled with z_i at every timestep i. The next pair of diffusion states (z_i-1, y_i-1) is then computed in an alternating fashion as z_i^inter = a_iz_i + b_iϵ_θ(y_i,i) y_i^inter = a_iy_i + b_iϵ_θ(z_i^inter,i) z_i-1 = pz_i^inter+(1-p)y_i^inter y_i-1 = py_i^inter+(1-p)z_i-1, where p∈ [0,1] is the weighting factor in the mixing operations and the pair (z_i^inter, y_i^inter) represents the intermediate diffusion states. According to <cit.>, the two mixing operations (<ref>)-(<ref>) are introduced to make the update procedure stable. Due to the alternating update formalism in (<ref>)-(<ref>), the computation can be inverted to obtain (z_i, y_i) in terms of (z_i-1, y_i-1) as y_i^inter = (y_i-1-(1-p)z_i-1)/p z_i^inter = (z_i-1-(1-p)y_i^inter)/p y_i = (y_i^inter - b_iϵ_θ(z_i^inter,i))/a_i x_i = (z_i^inter - b_iϵ_θ(y_i,i)/a_i Unlike (<ref>)-(<ref>), the inversion of (<ref>)-(<ref>) does not involve any approximation, thus enabling exact diffusion inversion. Finally, it is clear from the above equations that the NFE that EDICT has to perform is two times the NFE required for DDIM. This makes the method computationally expensive in practice. It is highly desirable to reduce the NFE in EDICT while retaining exact diffusion inversion. We provide such a method in the next subsection. §.§ BDIA-DDIM for exact diffusion inversion Reformulation of DDIM update expression: In this section, we present our new technique BDIA to assist DDIM in achieving exact diffusion inversion. To do so, we first reformulate the update expression for z_i-1 in (<ref>) in terms of all the historical diffusion states {z_j}_j=N^i as z_i-1 =z_N+∑_j=N^iΔ(t_j→ t_j-1|z_j) ≈z_N+∑_j=N^i∫_t_j^t_j-1d(z_τ, τ)dτ , where we use Δ(t_j→ t_j-1|z_j) to denote approximation of the integration ∫_t_j^t_j-1d(z_τ,τ)dτ via the forward DDIM step, given by Δ(t_j→ t_j-1|z_j) =z_j-1 - z_j =a_jz_j + b_jϵ̂_θ(z_j,j)-z_j. 
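The quantities reviewed so far translate directly into a few lines of code. The sketch below is illustrative only: alphas and sigmas are arrays over the discrete time grid and eps_model is a placeholder noise estimator. It implements the forward DDIM step, its conventional approximate inversion, and the forward increment Δ(t_j → t_{j-1} | z_j) just defined.

def ddim_coeffs(alphas, sigmas, i):
    # a_i = alpha_{i-1} / alpha_i,  b_i = sigma_{i-1} - sigma_i * alpha_{i-1} / alpha_i
    a_i = alphas[i - 1] / alphas[i]
    b_i = sigmas[i - 1] - sigmas[i] * alphas[i - 1] / alphas[i]
    return a_i, b_i

def ddim_step(z_i, i, eps_model, alphas, sigmas):
    # Forward DDIM update: z_{i-1} = a_i * z_i + b_i * eps(z_i, i).
    a_i, b_i = ddim_coeffs(alphas, sigmas, i)
    return a_i * z_i + b_i * eps_model(z_i, i)

def ddim_invert_approx(z_im1, i, eps_model, alphas, sigmas):
    # Conventional DDIM inversion: z_i ~= (z_{i-1} - b_i * eps(z_{i-1}, i)) / a_i.
    # The noise estimator is evaluated at z_{i-1} instead of the unknown z_i,
    # which is the source of the forward/backward inconsistency discussed above.
    a_i, b_i = ddim_coeffs(alphas, sigmas, i)
    return (z_im1 - b_i * eps_model(z_im1, i)) / a_i

def delta_forward(z_j, j, eps_model, alphas, sigmas):
    # Delta(t_j -> t_{j-1} | z_j) = a_j * z_j + b_j * eps(z_j, j) - z_j.
    return ddim_step(z_j, j, eps_model, alphas, sigmas) - z_j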
Replacing forward DDIM by backward DDIM: We argue that, in principle, the integration ∫_t_j^t_j-1d(z_τ,τ)dτ in (<ref>) can be alternatively approximated by the backward DDIM update, expressed as ∫_t_j^t_j-1d(z_τ,τ)dτ≈ - Δ(t_j-1→ t_j|z_j-1), where the notation Δ(t_j-1→ t_j|z_j-1) denotes the backward DDIM step from t_j-1 to t_j. The minus sign in front of Δ(t_j-1→ t_j|z_j-1) is due to integration over reverse time. The update expression for the backward DDIM step can be represented as Δ(t_j-1→ t_j|z_j-1) =z_j - z_j-1 =α_j(z_j-1-σ_j-1ϵ̂_θ(z_j-1, j-1) /α_j-1)+σ_jϵ̂_θ(z_j-1, j-1) -z_j-1 =z_j-1/a_j - b_j/a_jϵ̂_θ(z_j-1,j-1) - z_j-1. It is noted that in practice, we first need to perform a forward DDIM step over [t_j,t_j-1] to obtain z_j-1, and then we are able to perform the backward DDIM step computing Δ(t_j-1→ t_j|z_j-1). Bi-directional integration approximation (BDIA): We now present our new BDIA technique. Our primary goal is to develop an update expression for each z_i-1 as a linear combination of (z_i+1, z_i,ϵ̂_θ(z_i,i)). As will be explained in the following, the summation of the integrations ∑_j=N^i∫_t_j^t_j-1d(z_τ,τ)dτ for z_i-1 will involve both forward DDIM updates and backward DDIM updates. Suppose we are at the initial time step t_N with state z_N. Then the next state z_N-1 is computed by applying the forward DDIM (see (<ref>)): z_N-1 = a_Nz_N +b_Nϵ̂_θ(z_N, N) =z_N + Δ(t_N→ t_N-1|z_N). Upon obtaining z_N-1, we are able to compute Δ(t_N-1→ t_N|z_N-1) over the previous time-slot [t_N-1, t_N] and Δ(t_N-1→ t_N-2|z_N-1) over the next time-slot [t_N-1, t_N-2]. Consequently, the integration ∫_t_N^t_N-1d(z_τ,τ)dτ can be approximated by -Δ(t_N-1→ t_N|z_N-1). We define the update for z_i-1 for i≤ N-1 as below: When i≤ N-1, let the diffusion state z_i-1 be computed in terms of (z_i, z_i+1) as z_i-1 = z_i+1 + [a_iz_i+ b_iϵ̂_θ(z_i, i)]-(z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i)) =z_i+1-Δ(t_i→ t_i+1|z_i) + Δ(t_i→ t_i-1|z_i). We can conclude from (<ref>) that in the computation of each z_i-1, the integration for the most recent time-slot [t_i, t_i-1] is approximated by a forward DDIM update, and the integration for the second most recent time-slot [t_i+1, t_i] is approximated by a backward DDIM update. Fig. <ref> demonstrates how the entire integration ∫_t_N^t_i-1d(z_τ,τ)dτ for different z_i-1 is approximated. It can be seen from the figure that the directions of the integration approximation for neighbouring time-slots are always opposite. In other words, the forward and backward DDIM updates are interlaced over the set of time-slots {(t_j, t_j-1)}_j=N^i for each z_i-1. We summarize the results in a proposition below: Let z_N-1 and {z_i| i≤ N-2} be computed by following (<ref>) and (<ref>) sequentially. Then for each timestep i≤ N-2, z_i can be represented in the form of z_i = z_N + Δ(t_N-t_N-1|z_N)mod(N-j, 2) + ∑_j=i+1^N-1(-Δ(t_j→ t_j+1|z_j)+Δ(t_j→ t_j-1|z_j))mod(j-i,2). BDIA-DDIM inversion: Whereas the conventional DDIM inversion (<ref>) requires the approximation z_i-1≈z_i, which is only true in the limit of infinite steps, the formulation (<ref>) allows exact inversion (up to floating point error). Note that (<ref>) is symmetric in time: switching the timestep t_i+1 and t_i-1 in (<ref>) inverts the diffusion direction. That is, it follows from (<ref>) that the diffusion state z_i+1 can be computed in terms of (z_i, z_i-1) as z_i+1 = z_i-1 + Δ(t_i→ t_i+1|z_i) - Δ(t_i→ t_i-1|z_i) = z_i-1 - [a_iz_i+ b_iϵ̂_θ(z_i, i)]+ (z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i)). 
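Written in the same style as the sketch above (and reusing ddim_coeffs from it), the BDIA-DDIM update and its exact inverse make the time symmetry explicit: both are linear in (z_{i+1}, z_i, ϵ̂_θ(z_i, i)) and require only one network evaluation per step. This is again an illustrative sketch, not the reference implementation.

def bdia_step(z_ip1, z_i, i, eps_model, alphas, sigmas):
    # Forward BDIA-DDIM update:
    # z_{i-1} = z_{i+1} - Delta(t_i -> t_{i+1} | z_i) + Delta(t_i -> t_{i-1} | z_i).
    eps = eps_model(z_i, i)                          # single network evaluation per step
    a_i,  b_i  = ddim_coeffs(alphas, sigmas, i)
    a_i1, b_i1 = ddim_coeffs(alphas, sigmas, i + 1)
    forward  = a_i * z_i + b_i * eps                 # forward DDIM step over [t_i, t_{i-1}]
    backward = z_i / a_i1 - (b_i1 / a_i1) * eps      # backward DDIM step over [t_i, t_{i+1}]
    return z_ip1 + forward - backward

def bdia_invert(z_im1, z_i, i, eps_model, alphas, sigmas):
    # Exact inversion: switching t_{i-1} and t_{i+1} recovers z_{i+1} from (z_i, z_{i-1}).
    eps = eps_model(z_i, i)
    a_i,  b_i  = ddim_coeffs(alphas, sigmas, i)
    a_i1, b_i1 = ddim_coeffs(alphas, sigmas, i + 1)
    forward  = a_i * z_i + b_i * eps
    backward = z_i / a_i1 - (b_i1 / a_i1) * eps
    return z_im1 - forward + backward

Up to floating-point error, bdia_invert applied to the output of bdia_step returns z_{i+1} exactly, which is the property exploited for image reconstruction and editing.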
We summarize the above property of time-symmetry in a lemma below: Switching the timestep t_i-1 and t_i+1 in (<ref>) produces the reverse update (<ref>), and vice versa. Finally, similarly to the computation (<ref>), EDICT also does not involve any approximation and results in exact diffusion inversion. However, in contrast to EDICT, (<ref>) does not require a doubling of the NFE. § RELATED WORKS In the literature, there is a branch of research on development of time-reversible ODE solvers. For instance, Verlet integration was a time-reversible method for solving 2nd-order ODEs <cit.>. Leapfrog integration is another time-reversible method also developed for solving 2nd-order ODEs <cit.>. § EXPERIMENTS We conducted two types of experiments: (1) evaluation of image sampling for both BDIA-DDIM and BDIA-EDM; (2) image-editing via BDIA-DDIM. It was found that our new technique BDIA produces promising results for both tasks. §.§ Evaluation of image sampling In the first experiment, we consider the task of image sampling. The tested pre-trained models can be found in Appendix <ref>. Given a pre-trained model, 50K artificial images were generated for a particular NFE, and the corresponding FID score was computed. Table <ref> and <ref> summarize the computed FID scores. It is clear that by incorporating BDIA into both DDIM and EDM, the FID scores are improved. This can be explained by the fact that BDIA introduces the additional backward integration approximation per time-step in the sampling process. This makes the resulting final integration approximation become more accurate. §.§ Evaluation of image-editing In this second experiment, we evaluated BDIA-DDIM for image-editing by utilizing the open-source repository of EDICT[<https://github.com/salesforce/EDICT>]. Fig. <ref> visualizes the obtained results. We point out that BDIA-DDIM produces very similar results to EDICT while reducing by approximately half the NFE compared to EDICT. § CONCLUSIONS In this paper, we have proposed a new technique BDIA, to assist DDIM in achieving exact diffusion inversion. The key step of BDIA-DDIM is to perform DDIM update procedure twice at each time step t_i: one over the previous time-slot [t_i, t_i+1] and the other over next time-slot [t_i,t_i-1] in computing z_i-1. By doing so, the expression for z_i-1 becomes a linear combination of (z_i, ϵ̂_θ(z_i,i), z_i+1) that is symmetric in time. As a result, z_i+1 can be computed exactly as a linear function of (z_i, ϵ̂_θ(z_i,i), z_i-1), enabling exact diffusion inversion. Note that although the DDIM update is evaluated twice at each step, this is inexpensive since the costly neural functional evaluation is performed only once. 10 Arjovsky17WGAN M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv:1701.07875 [stat.ML], 2017. Bao22DPM_cov F. Bao, C. Li, J. Sun, J. Zhu, and B. Zhang. Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models. In ICML, 2022. Bao22DPM F. Bao, C. Li, J. Zhu, and B. Zhang. Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models. In ICLR, 2022. Bishop06 C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006. Chen20WaveGrad N. Chen, Y. Zhang, H. Zen, R. J. Weiss, M. Norouzi, and W. Chan. WaveGrad: Estimating Gradients for Waveform Generation. arXiv:2009.00713, September 2020. Dhariwal21DPM P. Dhariwal and A. Nichol. Diffusion models beat gans on image synthesis. arXiv:2105.05233 [cs.LG], 2021. Dinh14Nice L. Dinh, D. Krueger, and Y. Bengio. 
Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. Dinh16DensityEsti L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016. Goodfellow14GAN I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Nets. In Proceedings of the International Conference on Neural Information Processing Systems, pages 2672–2680, 2014. Gulrajani17WGANGP I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of wasserstein gans. In Advances in neural information processing systems, pages 5767–5777, 2017. Ho20DDPM J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. In NeurIPS, 2020. Ho22ClassiferFreeGuide J. Ho and T. Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. Huberman23DDPMInversion I. Huberman-Spiegelglas, V. Kulikov, and T. Michaeli. An Edit Friendly DDPM Noise Space: Inversion and Manipulations. arXiv:2304.06140v2 [cs.CV], 2023. Hyvarinen05ScoreMatching A. Hyvarinen. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 24:695–709, 2005. Karras22EDM T. Karras, M. Aittala, T. Alia, and S. Laine. Elucidating the Design Space of Diffusion-Based Generative Models. In 36th Conference on Nueral Information Processing Systems (NeurIPS), 2022. Kim22GuidedDiffusion D. Kim, Y. Kim, S. J. Kwon, W. Kang, and I.-C. Moon. Refining Generative Process with Discriminator Guidance in Score-based Diffusion Models. arXiv preprint arXiv:2211.17091 [cs.CV], 2022. Kingma18Glow D. P. Kingma and P. Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in neural information processing systems, 2018. Kingma21DDPM D. P. Kingma, T. Salimans, B. Poole, and J. Ho. Variational diffusion models. arXiv: preprint arXiv:2107.00630, 2021. Lam22BDDM M. W. Y. Lam, J. Wang, D. Su, and D. Yu. BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality Speech Synthesis. In ICLR, 2022. Liu22PNDM L. Liu, Y. Ren, Z. Lin, and Z. Zhao. Pseudo Numerical Methods for Diffusion Models on Manifolds. In ICLR, 2022. Lu22DPM_Solver C. Lu, Y. Zhou, F. Bao, J. Chen, C. Li, and J. Zhu. DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Sampling in Around 10 Steps. In NeurIPS, 2022. Mokady23NullTestInv R. Mokady, A. Hertz, K. Aberman, Y. Pritch, and D. Cohen-Or. Null-text Inversion for Editing Real Images using Guided Diffusion Models. In CVPR, 2023. Nichol21DDPM A. Nichol and P. Dhariwal. Improved denoising diffusion probabilistic models. arXiv preprint arXiv:2102.09672, 2021. Nichol22GLIDE A. Nichol, P. Dharwal, A. Ramesh, P. Shyam, P. Mishkin, B. McGrew, I. Sutskever, and M. Chen. GLIDE: Towards Photorealistic image generation and editing with text-guided diffusion models. In ICML, 2022. Rombach22LDM R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. Rombach22StableDiffusion R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. On High-resolution image synthesis with latent diffusion models. In CVPR, page 10684–10695, 2022. Ronneberger15Unet O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597 [cs.CV], 2015. Saharia22Imagen C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. Denton, S.-K.-S. Ghasemipour, B.-K. Ayan, S. S. Mahdavi, R.-G. Lopes, T. 
Salimans, J. Ho, D. J. Fleet, and M. Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022. Sauer22StyleGAN A. Sauer, K. Schwarz, and A. Geiger. StyleGAN-XL: Scaling StyleGAN to large diverse datasets. In SIGGRAPH, 2022. Shi23DragDiffusion Y. Shi, C. Xue, J. Pan, and W. Zhang. DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing. arXiv:2306.14435v2, 2023. Dickstein15DPM J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. ICML, 2015. Song21DDIM J. Song, C. Meng, and S. Ermon. Denoising Diffusion Implicit Models. In ICLR, 2021. Song21DPM Y. Song, C. Durkan, I. Murray, and S. Ermon. Maximum likelihood training of score-based diffusion models. In Advances in neural information processing systems (NeurIPS), 2021. Song19 Y. Song and S. Ermon. Generative modeling by estimating gradients of the data distribution. In Advances in neural information processing systems (NeurIPS), page 11895–11907, 2019. Song21SDE_gen Y. Song, J. S.-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole. Score-Based Generative Modeling Through Stochastic Differential Equations. In ICLR, 2021. Wallace23EDICT B. Wallace, A. Gokul, and N. Naik. EDICT: Exact Diffusion Inversion via Coupled Transformations. In CVPR, 2023. Verlet67VerletInt L. Verlet. Computer Experiments on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules. Physical Review, 159:98–103, 1967. Skeel93leapfrog R. D. Skeel. Variable Step Size Destabilizes the Stamer/Leapfrog/Verlet Method. BIT Numerical Mathematics, 33:172–175, 1993. GuoqiangIIA23 G. Zhang, K. Niwa, and W. B. Kleijn. On Accelerating Diffusion-Based Sampling Processes by Improved Integration Approximation. arXiv:2304.11328 [cs.LG], 2023. Zhang22DEIS Q. Zhang and Y. Chenu. Fast Sampling of Diffusion Models with Exponential Integrator. arXiv:2204.13902 [cs.LG], 2022. § EXTENSION OF THE UPDATE PROCEDURE OF (<REF>) As an extension of (<ref>), we can also compute z_i-1 by the update below: When i≤ N-1, let the diffusion state z_i-1 be computed in terms of (z_i, z_i+1) as z_i-1 = γ(z_i+1-z_i) + [a_iz_i+ b_iϵ̂_θ(z_i, i)]-γ(z_i/a_i+1-b_i+1/a_i+1ϵ̂_θ(z_i,i)-z_i) =z_i+γ(z_i+1-z_i)-γΔ(t_i→ t_i+1|z_i) + Δ(t_i→ t_i-1|z_i), where γ∈ [0,1]. § TESTED PRE-TRAINED MODELS FOR BDIA-DDIM AND BDIA-EDM
http://arxiv.org/abs/2307.03969v2
20230708125936
Impact of noise on inverse design: The case of NMR spectra matching
[ "Dominik Lemm", "Guido Falk von Rudorff", "O. Anatole von Lilienfeld" ]
physics.chem-ph
[ "physics.chem-ph" ]
University of Vienna, Faculty of Physics, Kolingasse 14-16, AT-1090 Vienna, Austria University of Vienna, Vienna Doctoral School in Physics, Boltzmanngasse 5, AT-1090 Vienna, Austria University Kassel, Department of Chemistry, Heinrich-Plett-Str.40, 34132 Kassel, Germany [email protected] Departments of Chemistry, Materials Science and Engineering, and Physics, University of Toronto, St. George Campus, Toronto, ON, Canada Vector Institute for Artificial Intelligence, Toronto, ON, M5S 1M1, Canada Machine Learning Group, Technische Universität Berlin and Institute for the Foundations of Learning and Data, 10587 Berlin, Germany Despite its fundamental importance and widespread use for assessing reaction success in organic chemistry, deducing chemical structures from nuclear magnetic resonance (NMR) measurements has remained largely manual and time consuming. To keep up with the accelerated pace of automated synthesis in self driving laboratory settings, robust computational algorithms are needed to rapidly perform structure elucidations. We analyse the effectiveness of solving the NMR spectra matching task encountered in this inverse structure elucidation problem by systematically constraining the chemical search space, and correspondingly reducing the ambiguity of the matching task. Numerical evidence collected for the twenty most common stoichiometries in the QM9-NMR data base indicate systematic trends of more permissible machine learning prediction errors in constrained search spaces. Results suggest that compounds with multiple heteroatoms are harder to characterize than others. Extending QM9 by ∼10 times more constitutional isomers with 3D structures generated by Surge, ETKDG and CREST, we used ML models of chemical shifts trained on the QM9-NMR data to test the spectra matching algorithms. Combining both and shifts in the matching process suggests twice as permissible machine learning prediction errors than for matching based on shifts alone. Performance curves demonstrate that reducing ambiguity and search space can decrease machine learning training data needs by orders of magnitude. Impact of noise on inverse design: The case of NMR spectra matching O. Anatole von Lilienfeld August 12, 2023 =================================================================== § INTRODUCTION Current development times of novel molecular materials can span several decades from discovery to commercialization. In order for humanity to react to global challenges, the digitization<cit.> of molecular and materials discovery aims to accelerate the process to a few years. Long experiment times severely limit the coverage of the vastness of chemical space, making the development of self driving laboratories for autonomous robotics experimentation crucial for high throughput synthesis of novel compounds (Fig.<ref> a))<cit.>. To keep the pace of automated synthesis, fast and reliable characterization of reaction products through spectroscopic methods is required, an often manual, time intense and possibly error prone task. One of the most common methods to elucidate the structure of reaction products are nuclear magnetic resonance (NMR) experiments.<cit.> Through relaxation of nuclear spins after alignment in a magnetic field, an NMR spectrum, characteristic of local atomic environments of a compound, i.e. functional groups, can be recorded. In particular, and NMR experiments are routinely used by experimental chemists to identify the chemical structure or relevant groups just from the spectrum. 
For larger compounds, however, the inverse problem of mapping spectrum to structure becomes increasingly difficult, ultimately requiring NMR of additional nuclei, stronger magnets, or more advanced two-dimensional NMR experiments<cit.>. Computer-assisted structure elucidation algorithms aim to iteratively automatize the structure identification process<cit.>. Current workflows include repeated predictions of chemical shifts for candidate structure inputs through empirical or ab initio methods<cit.>. Albeit accurate even in condensed phase through use of plane-waves <cit.> or QM/MM setup <cit.>, the cost of density functional theory (DFT) calculations severely limits the number of candidate structures that can be tested, leaving the identification of unknown reaction products out of reach for all but the smallest search spaces. Data driven machine learning models leveraging experimental or theoretical NMR databases<cit.> provide orders of magnitude of speedup over ab initio calculations, reaching 1-2 ppm mean-absolute-error (MAE) w.r.t. experiment or theory, respectively<cit.>. However, while the stoichiometry of the reaction product is usually known, e.g. through prior mass spectrometry experiments, the number of possible constitutional isomers exhibits NP hard scaling in number of atoms, quickly spanning millions of valid molecular graphs already for molecules of modest size (Fig.<ref> b)). As such, the inverse problem of inferring the molecular structure from an NMR spectrum still poses a major challenge even for rapid solvers. Recent machine learning approaches tackle the inverse problem using a combination of graph generation and subsequent chemical shift predictions for candidate ranking<cit.>. First explored by Jonas<cit.>, a Top-1 ranking with 57% reconstruction success-rate was achieved using deep imitation learning to predict bonds of molecular graphs. Sridharan et al.<cit.> used online Monte Carlo tree search to build molecular graphs resulting in a similar Top-1 ranking of 57.2%. Huang et al.<cit.> relied on substructure predictions from which complete graphs can be constructed, reaching 67.4% Top-1 accuracy by ranking substructure profiles instead of shifts. A commonality between all algorithms is the subsequent ranking of candidates using spectra matching or other heuristics. Consequently, even though the correct query compound could be detected early, similar candidates might be ranked higher, making the ranking process as critical as the candidate search itself. In this work, we analyse the effectiveness of the NMR spectra matching task encountered in the inverse structure elucidation problem. As stagnating improvements<cit.> in chemical shift predictions due to limited public NMR data aggravate candidate rankings, results suggest that both the prediction error of machine learning models and the number of possible candidates are crucial factors for elucidation success. By systematically controlling the size of chemical search space and accuracy of chemical shifts, we find that higher error levels become permissible in constrained search spaces. Moreover, results indicate that increasing the uniqueness through including both and shifts in the matching process, rather than relying on a single type of shift, significantly reduces ambiguity and enhances error tolerance. 
To evaluate the spectra matching task throughout chemical compound space, we systematically control the accuracy of 1D and chemical shifts of the 20 most common stoichiometries in QM9-NMR<cit.> by applying distinct levels of Gaussian white noise. Note that while we focus on DFT based 1D NMR in this work, future studies could include experimental data and 2D NMR information. Comparisons amongst stoichiometries suggest that chemical spaces with increasing amounts of heteroatoms and number of constitutional isomers are harder to characterize than others. To test the spectra matching method on a large search space, we extended QM9-NMR to 56k C_7O_2H_10 constitutional isomers. Controlling the chemical shift accuracy through machine learning models trained at increasing training set sizes, performance curves again indicate a trade-off between search space and accuracy. Hence, as less accurate shift predictions become useful, results show that machine learning training data needs can be reduced by multiple orders of magnitude. § THEORY & METHODS §.§ NMR Spectra Matching Consider a query or spectrum with a set of N possible candidate constitutional isomer spectra. We chose the squared euclidean distance as a metric to rank candidate spectra against the query spectrum (see SI Fig.3 for comparison against other metrics): d(δ_q, δ_i) = ∑_j=1^n (δ_q,j - δ_i,j)^2, with δ being a sorted spectrum of n chemical shifts (or ), q being the query, i being the i-th of N candidates, and j being the j-th chemical shift in a spectrum, respectively. To use both and shifts simultaneously for spectra matching, a total distance can be calculated as follows: d_combined = d(δ^13C_q, δ^13C_i) + γ· d(δ^1H_q, δ^1H_i), with γ=64 being a scaling factor determined via cross-validation (see SI Fig.1) to ensure similar weighting. Final rankings are obtained by sorting all candidates by distance. The Top-1 accuracy is calculated as the proportion of queries correctly ranked as the closest spectrum, respectively. §.§ Elucidation performance curves To analyse the spectra matching elucidation accuracy, we systematically control the number of possible candidates N and the accuracy of chemical shifts, respectively. For each constitutional isomer set, we choose 10% as queries and 90% as search pool, respectively. Next, we randomly sample N spectra from the search pool, including the query spectrum. Each sample size is drawn ten times and the Top-1 accuracy averaged across all runs. To control the accuracy of chemical shifts, we apply Gaussian white noise (up to 1 or 10 σ for and , respectively) or use the machine learning error as a function of training set size (c.f. SI Fig.5 for learning curves). For each N and chemical shift accuracy, results are presented as elucidation performance curves (c.f. Fig.<ref> a-b)), showing the elucidation success as a function of chemical shift accuracy in terms of mean absolute error (MAE). §.§ Chemical Shift Prediction We relied on kernel ridge regression (KRR) for machine learning and chemical shifts as presented in Ref.<cit.>. We use a Laplacian kernel and the local atomic Faber-Christensen-Huang-Lilienfeld (FCHL19<cit.>) representation with a radial cutoff<cit.> of 4 . The kernel width and regularization coefficient have been determined through 10-fold cross-validation on a subset of 10'000 chemical shifts of the training set. 
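The matching procedure described above is straightforward to implement. The sketch below is illustrative and assumes that all candidates share the query's stoichiometry, so the sorted shift vectors have equal length; it ranks a candidate pool by the combined 13C/1H distance with γ = 64.

import numpy as np

def spectrum_distance(query_shifts, candidate_shifts):
    # Squared Euclidean distance between sorted chemical-shift lists.
    q = np.sort(np.asarray(query_shifts, dtype=float))
    c = np.sort(np.asarray(candidate_shifts, dtype=float))
    return float(np.sum((q - c) ** 2))

def combined_distance(query_c, cand_c, query_h, cand_h, gamma=64.0):
    # d_combined = d(13C) + gamma * d(1H); gamma balances the two shift scales.
    return spectrum_distance(query_c, cand_c) + gamma * spectrum_distance(query_h, cand_h)

def rank_candidates(query_c, query_h, pool):
    # `pool` is a list of (13C shifts, 1H shifts) tuples, one per constitutional isomer.
    distances = [combined_distance(query_c, c13, query_h, h1) for c13, h1 in pool]
    return np.argsort(distances)          # index 0 of the result is the Top-1 candidate

The Top-1 accuracy used throughout is then simply the fraction of queries for which this ranking places the true isomer first.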
§.§ Data The QM9-NMR<cit.> dataset was used in this work, containing 130'831 small molecules up to nine heavy atoms (CONF) with chemical shieldings at the mPW1PW91/6-311+G(2d,p)-level of theory. We used the 20 most common stoichiometries (Fig.<ref> b)), having a minimum of 1.7k constitutional isomers available in the dataset. To extend the QM9-NMR C_7O_2H_10 constitutional isomers space, we generated 54'641 SMILES using Surge<cit.>. 3D structures have been generated using ETKDG<cit.> and CREST<cit.> using GFN2-xTB/GFN-FF. Adding the structures to QM9, a total pool size of 56.95k C_7O_2H_10 isomers was obtained. For the training of chemical shift machine learning models, we selected C_8OH_12, C_8OH_10, C_8OH_14, C_7O_2H_8 and C_7O_2H_12 constitutional isomers, yielding a total of 143k and 214k training points, respectively. § RESULTS & DISCUSSION §.§ Spectra matching accuracy with synthetic noise To analyse the influence of noise and number of candidates on the elucidation success, we applied Gaussian noise to and shifts of C_7O_2H_10, C_5N_3OH_7 and C_8OH_14 constitutional isomers, respectively. Fig.<ref> a-b) depicts a sigmoidal shaped trend of Top-1 elucidation accuracies at increasing candidate pool sizes N_QM9 as a function of mean absolute error (MAE). Note that increasing the maximum candidate pool size leads to an offset of the trend towards less permissible errors. A possible explanation is the correlation of the density of chemical space with increasing numbers of candidate spectra N<cit.>. As shift predictions need to become more accurate, limiting N through prior knowledge of the chemical space could be beneficial. Similar findings have been reported by Sridharan et al.<cit.>, noting that brute force enumerations of chemical space lead to worse rankings than constrained graph generation. Note that while the trends in and elucidation are similar, less error is permissible when using shifts. To further reduce the ambiguity, we include both and shifts into the matching problem as per Eq.<ref>. Results suggest 50% and ∼150% more permissible and errors when both spectra are considered in the matching process (Fig.<ref> c)). Similar to how chemists solve the elucidation problem, the inclusion of more distinct properties increases the uniqueness and can improve the elucidation success. §.§ Extrapolating the search space Due to the limited amount of constitutional isomers in databases compared to the number of possible graphs faced during inverse design (Fig.<ref> b)), assessing the chemical shift accuracy for successful elucidation is severely limited. As such, we extrapolate elucidation performance curves to obtain estimates about chemical shift accuracies in candidate pool sizes larger than QM9. We fit each elucidation performance curve (Fig.<ref> a-b)), respectively, using a smoothly broken power law function: f(x) = (1+ (x/x_b)^d)^α with x_b controlling the upper bend and offset, d changing the curvature and α changing the tilt of the function (see SI Fig.2), respectively. The parameters of Eq.<ref> as a function of N can again be fitted using a power law function (see SI Fig.2) and extrapolated to the total number of graphs N_Surge, respectively. Results of the extrapolation (Fig.<ref> a-b) dashed) indicate significant differences in elucidation efficiency among stoichiometries. For instance, C_8OH_14 queries are potentially easier to elucidate than C_5N_3OH_7 structures. Possible reasons are the limited number of C_8OH_14 graphs compared to millions of C_5N_3OH_7 isomers. 
Moreover, the number of heteroatoms of the C_5N_3OH_7 stoichiometry might hamper the characterization when only relying on or , respectively. Hence, to solve the inverse structure elucidation problem using experimental data of compounds larger than QM9, reducing ambiguities through including both and shifts as well as to reduce the candidate space is critical for elucidation success. §.§ Trends in chemical space To analyse the elucidation efficiency throughout chemical space, we applied the Gaussian noise and extrapolation procedure to the 20 most common stoichiometries in QM9 (Fig.<ref> b)). Fig.<ref> a) shows the MAE required for 95% elucidation success as a function of N_Surge. Results suggest that less error is permissible for stoichiometries with large N_Surge and fewer carbon atoms. As such, using only shifts might not be sufficient to fully characterize the compound. Again, similar to how chemists use multiple NMR spectra to deduct chemical structures, additional information such as shifts are beneficial to extend the information content. In Fig. <ref> b), the error permissiveness of spectra matching using only (see SI Fig.4 for ) versus combining both and is being compared, revealing a linear trend between both. Note that the C_7NOH_7 stoichiometry shows the smallest benefit from adding additional information. Interestingly, a hierarchy for C_7NOH_X stoichiometries of different degrees of unsaturation is visible, indicating an inverse correlation between number of hydrogens and MAE (Fig. <ref> b) green). Similar hierarchies are also observed for other stoichiometries such as C_7O_2H_X and C_8OH_X (Fig. <ref> b) blue and orange). On average, the combination of and for spectra matching increases the error permissiveness of and by 85% and 261% (see SI Fig.4), respectively. §.§ Comparison to machine learned shift predictions To test the elucidation performance using machine learning predictions, we trained and KRR models at increasing training set sizes (see SI Fig.5 for learning curves) and predicted chemical shifts of 56k C_7O_2H_10 constitutional isomers. Results again show similar trends as observed with Gaussian noise (Fig.<ref> a-b)), however, indicate more permissive accuracy thresholds. For instance, KRR predictions at 2 ppm MAE can identify 64% of queries rather than only 17% suggested by the Gaussian noise experiment. The difference could be explained due the systematic, non uniform nature of the QM9<cit.> chemical space, influencing the shape and extrapolation of elucidation performance curves in Fig.<ref>. Moreover, Gaussian noise is applied to all shifts at random compared to possibly more systematic machine learning predictions. Note that the trade-off between error and N is consistent and that the exact parameters will depend on the machine learning model and the finite sampling of constitutional isomer space. To model possible experimental noise on query spectra, we apply Gaussian noise to query spectra and evaluate the elucidation performance of the best performing machine learning model (see insets in Fig.<ref> a-b)). Results indicate a halving of elucidation accuracy when the query spectrum contains up to 2 ppm MAE_Q in and 0.15 ppm MAE in error, respectively. Thus, in the presence of experimental measurement noise even higher prediction accuracies might be necessary. Combining both and spectra for matching improves the elucidation performance up to 90% (Fig.<ref> e)). 
Again, the combination of spectra for elucidation highlights the effectiveness of reducing the ambiguity of the matching problem by including additional properties. Investigating potential strategies to reduce the constitutional isomer search space, we constrained N based on functional groups (see SI Table 1). Randomly selecting functional groups present in each query, N can be reduced by 50% and 62% on average (see Fig.<ref> d) inset for distributions), respectively. Results in Fig.<ref> c-d) indicate an increase of the elucidation accuracy by 5% in and up to 10% for , respectively, in agreement with the elucidation performance in Fig.<ref> a-b). Note that the knowledge of two functional groups only led to marginal improvements. However, fragmentation could be more beneficial for larger compounds than present in QM9<cit.>, as reported by Yao et al.<cit.>. Using both and shifts on the reduced search space only lead to marginal improvements of 0.5% over the results of the full search space. §.§ Balancing search space and accuracy We use performance curves to analyse the relationship between the elucidation performance of C_7O_2H_10 queries, machine learning prediction errors and candidate pool sizes N. The systematic decay of performance curves (Fig.<ref> red and blue) again demonstrates that constraining N with prior knowledge allows for less accurate shift predictions to be applicable. Extrapolating the performance curves indicates a machine learning MAE of 0.93 ppm to correctly rank 90% of queries out of 56k possible candidates (Fig.<ref> red), 0.02 ppm lower than suggested by Gaussian noise. To reach an MAE of 0.93 ppm, four million training instances are required (Fig.<ref> orange). Using both and shifts requires two orders of magnitude less training data (Fig.<ref> blue). As such, facing expensive experimental measurements and ab initio calculations, more effective inverse structure elucidation could be achieved by balancing machine learning data needs through reduced search spaces and incorporation of additional properties. § CONCLUSION We have presented an analysis of the effectiveness of the NMR spectra matching task encountered in the inverse structure elucidation problem. By systematically controlling the predictive accuracy of and chemical shifts, we found consistent trends throughout chemical compound space, suggesting that higher errors become permissible as the number of possible candidates decreases. Note that while we relied on 1D ab initio NMR data, similar analysis could be performed using 1D or 2D experimental spectra. Applications to the most common constitutional isomers in QM9 highlight that chemical spaces with many heteroatoms are harder to characterize when only relying on a single type of chemical shift. Using both and chemical shifts increases the error permissiveness by 85% and 261% on average, respectively. Machine learning predictions for 56k C_7O_2H_10 compounds showed that using both or shifts increased elucidation success to 90% compared to only 64% and 36% when used alone, respectively. The usefulness of the analysis is expressed via performance curves, showing that training demands can be reduced by orders of magnitude compared to relying on specific shifts alone. We believe that as the accuracy of machine learning models to distinguish spectra is limited, constrained search spaces or inclusion of more distinct properties are necessary to improve candidate rankings. 
Rather than solely relying on more accurate models, future approaches could include explicit knowledge of chemical reactions, functional groups or data from mass spectrometry, infrared- or Raman spectroscopy<cit.>, respectively. Finally, explicitly accounting for atomic similarities and chemical shift uncertainties via the DP5 probability might further increase the confidence in structure assignments<cit.>. § ACKNOWLEDGEMENT O.A.v.L. has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 772834). O.A.v.L. has received support as the Ed Clark Chair of Advanced Materials and as a Canada CIFAR AI Chair. Icons in Fig.<ref> from DBCLS, Openclipart and Simon Dürr from bioicons.com under CC-BY 4.0 and CC0, respectively. § DATA & CODE AVAILABILITY The QM9-NMR dataset is openly available at <https://moldis.tifrh.res.in/data/QM9NMR>. The code and additional data used in this study is available at <https://doi.org/10.5281/zenodo.8126380>. § CONFLICT OF INTEREST The authors have no conflict of interest. § REFERENCES ieeetr
http://arxiv.org/abs/2307.04569v1
20230710140129
Interpreting and generalizing deep learning in physics-based problems with functional linear models
[ "Amirhossein Arzani", "Lingxiao Yuan", "Pania Newell", "Bei Wang" ]
cs.LG
[ "cs.LG", "physics.flu-dyn" ]
Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation Xiangnan He Received January 1, 2015; accepted January 1, 2015 =========================================================================================== ^1Department of Mechanical Engineering, University of Utah, Salt Lake City, UT, USA ^2Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA ^3Department of Mechanical Engineering, Boston University, Boston, MA, USA. ^4School of Computing, University of Utah, Salt Lake City, UT, USA Correspondence: Amirhossein Arzani, University of Utah, Salt Lake City, UT, 84112 Email: [email protected] empty Although deep learning has achieved remarkable success in various scientific machine learning applications, its black-box nature poses concerns regarding interpretability and generalization capabilities beyond the training data. Interpretability is crucial and often desired in modeling physical systems. Moreover, acquiring extensive datasets that encompass the entire range of input features is challenging in many physics-based learning tasks, leading to increased errors when encountering out-of-distribution (OOD) data. In this work, motivated by the field of functional data analysis (FDA), we propose generalized functional linear models as an interpretable surrogate for a trained deep learning model. We demonstrate that our model could be trained either based on a trained neural network (post-hoc interpretation) or directly from training data (interpretable operator learning). A library of generalized functional linear models with different kernel functions is considered and sparse regression is used to discover an interpretable surrogate model that could be analytically presented. We present test cases in solid mechanics, fluid mechanics, and transport. Our results demonstrate that our model can achieve comparable accuracy to deep learning and can improve OOD generalization while providing more transparency and interpretability. Our study underscores the significance of interpretability in scientific machine learning and showcases the potential of functional linear models as a tool for interpreting and generalizing deep learning. Keywords: Explainable Artificial Intelligence (XAI); Scientific machine learning; Functional data analysis; Operator learning; Generalization § INTRODUCTION In recent years, deep learning has emerged as a transformative modeling approach in various science and engineering domains. Deep learning has been successfully used for improving the quality of physical data or improving physics-based models (e.g., superresolution <cit.>, denoising <cit.>, system/parameter identification <cit.>, and closure modeling <cit.>). Additionally, deep learning is a key tool in machine learning enhanced models where the goal of deep learning is to provide a surrogate for the physics-based model, which is useful in many-query and real-time predictive modeling <cit.>. While deep learning has demonstrated impressive success in most of these studies, its inherent black-box nature raises concerns regarding the interpretability of the prediction processes. In physics-based systems, where causal relationships and fundamental first-principle laws play a pivotal role in the results, interpretable models are essential for understanding the phenomena of interest and obtaining trustworthy results. 
Additionally, it is often desirable for deep learning to generalize and extrapolate beyond the training data once the model is deployed and being used in practice, which is a challenging task in physics-based deep learning <cit.>. The challenges associated with interpretability and generalization in machine learning and deep learning could be overcome with parsimonious and interpretable models <cit.>. In physics-based modeling, this has been achieved with various techniques such as symbolic regression <cit.>, sparse identification of nonlinear dynamics (SINDy) <cit.>, interpretable reduced-order models (ROM) <cit.>, and design of certain coordinate transformations in deep neural networks <cit.>. More broadly, the growing field of interpretable and explainable artificial intelligence (XAI) offers a set of tools aimed at making black-box deep learning models understandable and transparent to humans <cit.>. XAI approaches could be classified as “by-design” and “post-hoc” methods. The aforementioned parsimonious models are by-design where one achieves interpretability by building such features in the machine learning model from the initial design phase, which has been a more common approach in physics-based modeling and scientific machine learning. However, by-design XAI approaches usually lead to a tradeoff between model accuracy and interpretability <cit.>. On the other hand, post-hoc XAI approaches do not compromise model accuracy and instead, explain the model's results in a post-processing step. Standard off-the-shelf XAI approaches have been recently used in various fields such as healthcare <cit.>, aerospace <cit.>, turbulence modeling <cit.>, and material science <cit.>. Interpretable machine learning models also offer the opportunity to improve generalization. However, generalization to out-of-distribution (OOD) input data is a key challenge in scientific machine learning and particularly for deep learning models <cit.>. While standard techniques such as regularization could be used to achieve acceptable in-distribution generalization error (interpolation), OOD generalization (extrapolation) is usually not achieved. Extrapolation poses a serious challenge for black-box deep learning models. As an example, machine-learning based turbulence models trained from equilibrium turbulence databases have failed once applied to non-equilibrium turbulence and transitional flows <cit.>. Interestingly, in certain examples, a simple linear regression model has exhibited remarkable performance in extrapolating training data, with an average error rate merely 5% higher than that of black-box models and even surpassed black-box models in approximately 40% of the scientific machine learning prediction tasks evaluated <cit.>. Here, we propose a post-hoc deep learning interpretation strategy where we build a surrogate for a given trained neural network in the form of generalized linear integral equations. We hypothesize that the interpretable model also improves OOD generalization while providing an approximation to the neural network's predictions. Given that many deep learning tasks in scientific computing deal with mapping between functions and functionals, we leverage theories within the field of functional data analysis (FDA) <cit.>. FDA provides a theoretical framework to effectively model and analyze functional data and has been used in different applications <cit.>. 
Specifically, we will use functional linear models that enable one to construct analytical mapping involving functions/functionals in the form of interpretable integral equations <cit.>. In scientific machine learning, the learning tasks often involve mapping between high-dimensional data <cit.>. In these high-dimensional settings, the simplest interpretable machine learning model, multivariate linear regression, can fail and more advanced interpretable models such as functional regression have been shown to provide better results <cit.>. Unlike multivariate methods that discard spatial/temporal distribution of the data, functional methods maintain and leverage the intrinsic structure of the data, capturing the temporal or spatial relationships between data points, and therefore can provide a more accurate mapping between the data and uncover valuable insights and patterns. A key challenge in functional regression is the learning of the kernel function that appears in the integral equations. A common approach is expanding the kernel in a certain basis or using a pre-defined fixed kernel <cit.>. Kernel regression is an established statistical modeling approach <cit.> and kernel methods have been used in building nonlinear ROMs <cit.>. In this work, we propose a more flexible framework where the kernel is learned from a library of candidate kernel functions using sparse regression. Once trained on data produced by probing a neural network in a post-hoc fashion, the model will provide an analytical representation in the form of a linear sum of integral equations that not only approximates the neural network's behavior but also provides potential improvement in OOD generalization. The model could be trained based on data probed on the entire training landscape or a subset of the input parameter space to provide a global or local interpretation, respectively. Our proposed approach could also be viewed in the context of operator learning and neural operators <cit.>. Deep learning of operators has recently gained attention in learning mapping between function spaces and has been utilized in various scientific machine learning problems <cit.>. Interestingly, certain neural operators also leverage integral equations and generalized versions of functional linear models <cit.>. In scientific computing, the utilization of Green's functions/operators <cit.> has inspired the incorporation of integral equations into the architecture of deep neural operators. These integral equations enable the learning of operators by mapping between function spaces and belong to the category of functional linear models. In this paper, we present an interpretable machine learning model that builds on several fields such as operator learning, XAI, and FDA. Our paper provides the following major contributions: * We present an early application of functional linear models for post-hoc interpretation of black-box deep learning models in scientific computing. * We provide a new library based approach together with sparse regression for discovering the kernels in the functional linear models. This provides more flexibility compared to prior FDA studies with pre-defined kernels. * The majority of post-hoc XAI approaches used in scientific machine learning are local and explain neural network's predictions in a region local to a desired input. Our proposed approach is a global surrogate model that could also be easily adapted to local interpretation tasks. 
* We demonstrate that our proposed functional linear model could be trained either on the data itself or by probing a trained neural network. This allows the model to be utilized either as an interpretable operator learning model or as a black-box interpreter. We document training and OOD testing performance in solid mechanics, fluid mechanics, and transport test cases. The rest of this paper is organized as follows. First, in Sec. <ref>, we provide a brief theoretical overview of different approaches such as FDA to motivate the use of integral equations as a surrogate for deep learning. Next, we present our proposed functional linear model (Sec. <ref>) and explain how it is applied for interpretation and OOD generalization in Sec. <ref>. In Sec. <ref>, we present our results for different scientific machine learning test cases. The results and our framework is discussed in Sec. <ref>, and we summarize our conclusions in Sec. <ref>. § METHODS §.§ Theoretical motivation and background Integral equations provide a mathematical framework that encourages the development of interpretable models by explicitly defining the relationships between variables. Our proposed interpretable surrogate model for understanding a deep learning operator is built upon integral equations. These integral equations yield an interpretable generalized linear model that approximates the predictions of the neural network. We provide a brief review of several topics in applied mathematics and machine learning to motivate the idea of using integral equations to build a surrogate for an available deep learning model. §.§.§ Green's functions In many physics-based learning tasks, we are interested in solving partial differential equations. Consider the differential equation L 𝐮 = 𝐟(), where one is interested in solving 𝐮, for different input source terms 𝐟(). Similar to how a linear system of equations 𝐀=𝐛 could be solved as = 𝐀^-1𝐛 using an inverse operator 𝐀^-1, the above differential equation could also be inverted assuming L is a linear operator 𝐮() = L^-1𝐟 = ∫𝐠(,ξ) 𝐟(ξ) dξ , where 𝐠(,ξ) is the Green's function corresponding to the linear operator L and the action of 𝐠(,ξ) on 𝐟 that produces the solution is the Green's operator. Therefore, at least for linear operators one can find an analytical operator representation in the form of an integral equation to map the given input 𝐟 to the output 𝐮. When dealing with a nonlinear operator, it is possible to employ a similar concept to find a linear approximation of the operator, at least within a local context. This motivates extending Green's function concept to a generalized linear integral model that can approximate desired physics-based operator learning problems. Given the existing knowledge about Green functions for linear differential equations <cit.>, we can design the integral equations based on the physical problem we are trying to solve. §.§.§ Convolutional neural networks (CNN) Convolutional neural networks (CNN) are arguably one of the most successful deep learning architectures and are widely used in computer vision <cit.> and mapping 2D image-like field variables in scientific machine learning <cit.>. A key reason behind CNN's success is the fact that each layer is only connected to a local spatial region in the previous layer. This is achieved using convolutional operators that enable CNN to learn hierarchical features. 
We can write a convolutional integral operation as 𝐮(x,y) = ∫𝐊(ζ, η ) 𝐟(x-ζ,y-η) d ζ dη= ∫𝐊(x - ζ, y - η ) 𝐟( ζ, η) d ζ dη , where the output 𝐮 is generated by convolving the input 𝐟. In CNN, the above operation is done in a discrete manner and the kernel 𝐊 represents the learnable parameters of the network. Although convolution in a CNN involves a more complex process of sliding filters across the input and is accompanied by additional operations in different layers, the fundamental idea of a convolutional integral equation that maps inputs to outputs through convolutions inspires the development of integral equation models. Such models can construct interpretable surrogates for CNNs and other deep learning architectures. Interestingly, these convolution layers perform feature learning that once combined with fully connected layers allow CNN to make predictions. Our proposed approach aligns closely with this strategy. Similarly, we leverage a library of integral functions to facilitate feature learning and prediction is made through linear regression. In CNN, the first version of the above equation involving 𝐟(x-ζ,y-η) is used. However, in building our interpretable model, we will use the equivalent version involving 𝐟( ζ, η) (second form in Eq. <ref>). §.§.§ Radial basis function (RBF) networks Radial basis function (RBF) networks are a neural network generalization of kernel regression or classification <cit.>. RBF networks use radial basis functions as their activation function. For a single hidden layer, the output of an RBF network could be written as 𝐮(𝐱) = ∑_i=1^m w_i exp(- 𝐱- μ_i ^2 / 2 β_i^2 ) , where m different hidden units with different prototype vector μ_i and bandwidth β_i are used with 𝐱 as an input. The weights of the network w_i are optimized to find the final solution. Each RBF influences a set of points in the vicinity of its feature vector μ_i with the distance of influence dictated by the bandwidth β_i. RBF networks are universal function approximators. In our library of integral equations for our surrogate model below, we will also leverage RBFs but in the integral form. That is, the feature vector μ will be replaced with a continuous variable and the integration will be done with respect to this variable. §.§.§ Gaussian process regression (GPR) In Gaussian process regression (GPR), a function is approximated using Gaussian processes, which are specified by a mean function and a covariance function (a kernel) <cit.>. The squared exponential kernel also used in RBF (Eq. <ref>) is a popular choice in GPR. GPR effectively integrates information from nearby points through its kernel function, similar to how we will build our interpretable model below. An intriguing observation is that as the number of neurons in a single hidden layer of a neural network approaches infinity, it evolves into a global function approximator. Similarly, under certain constructs, a neural network with a single hidden layer for a stochastic process converges towards a Gaussian process when the hidden layer contains an infinitely large number of neurons <cit.>. §.§.§ Neural operators Neural operators are an extension of neural networks that enable learning of mapping between infinite-dimensional function spaces <cit.>. Traditional neural networks also learn a mapping between functions (as used in our test cases below) but they require a fixed discretization of the function, whereas neural operators are discretization-invariant. 
In neural operators, typically, each layer is a linear operator (e.g., an integral equation) and nonlinear activation functions are used to increase the expressive power. The input 𝐯 to each layer is first passed through an integral linear operator ∫𝐊(, ξ) 𝐯(ξ) d ξ using a pre-defined kernel 𝐊 and subsequently a nonlinear activation is applied. Therefore, neural operators also leverage integral equations in their regression tasks but build on neural network architectures for increased expressive power at the price of reduced interpretability. Different designs of the kernel lead to different neural operators. Fourier neural operators (FNO) are a popular and successful example that leverages Fourier transforms and convolutions <cit.>. Graph neural operators <cit.> is another example that uses integral equations similar to the approach we will employ in our model. These operators leverage Monte Carlo sampling techniques to approximate the integral equations. §.§.§ Functional data analysis (FDA) FDA is a mathematical framework that focuses on analyzing data in the form of smooth functions, rather than discrete observations <cit.>. We will be presenting our proposed framework within the context of FDA and therefore more information is provided here. In FDA, the dependent variable, independent variable, or both are functionals. Broadly speaking, we may use FDA to perform mapping and regression when functions are involved either as input or output. Let's consider a mapping between an input functions 𝐟() and output 𝐮, where the output is either a function (scalar/vector field) or a single scalar/vector. In the simplest case mimicking classical regression, for a function output, one might write the output concurrently as 𝐮() = α() + ψ()𝐟(), where α and ψ are bias and regression coefficient functions, respectively. However, this simple concurrent formulation does not consider the potential influence of neighboring points on the solution. Integral equations could be used to overcome this issue and provide a more realistic scenario. We can formulate the regression problem using functional linear models <cit.>. Assuming that all data are mean-centered, a fully functional model is applied to the case where the input and output are both functions 𝐮() = ∫ψ(, ξ) 𝐟(ξ) d ξ , in which the goal is to find ψ. In a separate problem, when the output is a single scalar/vector value, the problem can be formulated as a scalar/vector response model 𝐮 = ∫ψ(ξ) 𝐟(ξ) d ξ . Finally, if the output is a function and the input is a single scalar/vector value the problem can be written as a functional response model 𝐮() = ψ() 𝐟 . In this paper, we will only study the first two cases (Eq. <ref> and <ref>). §.§ Interpretable functional linear models The discussion above highlights the importance of integral equations in learning mappings between function spaces. Although the various methods mentioned earlier may have similarities and can be considered equivalent in certain conditions, our primary focus will be on FDA with functional linear models. To enhance the expressive capacity of functional linear models, we will expand their capabilities in three distinct ways: * First, we will lift the input functions into a higher-dimensional feature space using a pre-specified lifting map 𝒯 (e.g., polynomials) and then define functional linear models for each component of the new feature space separately and use linear superposition to define the final model. 
Such lifting operations have been successfully used in scientific machine learning models (e.g., <cit.>). * We will use generalized functional linear models <cit.>. Specifically, we will allow a nonlinear function g(.) to be applied to the functional linear models to create outputs such as 𝐮() = g ( ∫ψ(, ξ) 𝐟(ξ) d ξ). * Model selection (choice of the kernel) and tuning its hyperparameters is a difficult task in various forms of kernel regression <cit.>. Instead of pre-specifying the kernels ψ, we will pre-define a library of kernels and associated hyperparameters. Subsequently, we will use sparse regression to select among the library of candidate functions. By specifying the desired level of sparsity, a balance can be achieved between interpretability and accuracy. In the examples explored in this work, we investigate deep learning tasks and corresponding interpretable functional linear models where the input is a 2D function (image) defined on Ω and the output is either a single scalar value, a 1D function (line), or a 2D function (image). These models can be considered as mappings: 𝐟(x,y) →𝐮, 𝐟(x,y) →𝐮(x), and 𝐟(x,y) →𝐮(x,y), respectively. Incorporating the above three modifications to functional linear models and using convolution-like operators for the tasks involving image or line outputs, we write the final models in the most general form as 𝐮(x,y) = ∑_n=1^N∑_m=1^M∑_ℓ=1^L w_n,m,ℓ g_n ( ∫_Ωψ_m(x-ζ, y-η ) 𝒯_ℓ𝐟(ζ,η) d ζ dη) , 𝐮(x) = ∑_n=1^N∑_m=1^M∑_ℓ=1^L w_n,m,ℓ g_n ( ∫_Ωψ_m(x-ζ, η ) 𝒯_ℓ𝐟(ζ,η) d ζ dη) , 𝐮 = ∑_n=1^N∑_m=1^M∑_ℓ=1^L w_n,m,ℓ g_n ( ∫_Ωψ_m(x, y ) 𝒯_ℓ𝐟(x,y) d x d y ) , where a linear combination of L different lifting operations 𝒯 on the inputs, M different kernels ψ, and N different nonlinear functions g are used in writing the final solution. This could be considered as a generalized version of an additive functional regression <cit.>. The goal is to formulate a linear regression problem based on the above analytical equations and training data to find the unknown coefficients w_n,m,ℓ. We do not impose any constraint on the kernel ψ besides being L^2, and therefore inducing Hilbert-Schmidt operators. Below we present a few remarks. * The above models are analytically tractable (interpretable), particularly for small L, M, and N. Sparsity promoting regression will be used in this study to eliminate many of the weights w_n,m,ℓ in a data-driven fashion and improve the interpretability of the final model. The remaining non-zero weights represent a reduced-order representation of the system, which behaves linearly with respect to its parameters w_n,m,ℓ. * In practice, it is not necessary to consider all possible combinations of lifting, kernels, and nonlinearity in the library employed for sparse regression. The library could be defined in a flexible fashion as an arbitrary combination of these operators and the final solution will be a linear superposition of the selected terms in the library. * The kernels ψ provide an interpretation for each term in the model. ψ(x-ζ, y-η) in Eq. <ref> represents the effect of input function 𝐟 at point (ζ,η) on the output function 𝐮 at point (x,y). ψ(x, y) in Eq. <ref> represents a weight for the influence of the input function 𝐟's value at point (x,y) on the output 𝐮 and creates a weighted average. * Most kernels used are equipped with a bandwidth that also needs to be estimated and represents a characteristic problem-dependent length scale and smoothing parameter. 
Therefore, in our library of candidate terms, for each such kernel, we also consider several candidate bandwidths and treat each kernel separately. Therefore, M in the above equations is typically a large value. For instance, if three different analytical expressions are proposed for the kernels ψ with 20 different potential bandwidths each, then M=60. * To enable approximation of the integrals during training, the above integrals are replaced with discrete sums that approximate the integrals. Therefore, the above models could be compared to a graph neural operator with a single hidden layer <cit.>. However, in our model, various kernels are added linearly in parallel to form the final solution in an analytically simple manner, whereas in neural operators the kernels are added sequentially in different hidden layers, which reduces the interpretability. Additionally, as discussed below, we provide a library approach for kernel selection. * In this work, we only study regression tasks. The proposed approach could be extended to classification tasks with appropriate selection of the nonlinear function g <cit.>, similar to activation function selection in deep learning. To find the coefficients w_n,m,ℓ, a linear regression problem is formulated based on the above integral equation models. Let's assume a set of Q training data pairs (𝐟 and 𝐮) is available and sampled over a set of collocation points x_i and y_j (i=1,…,I, j=1,…,J) defined on a 2D grid (a total of N' = I × J points). The input image 𝐟(x_i,y_j) is mapped to 𝐮(x_i,y_j), 𝐮(x_i), or 𝐮 based on the task. Additionally, let's assume a total of P terms is arbitrarily selected among the L× M × N candidate terms for the library of integral equations. The above integral equations could be numerically evaluated using any numerical integration technique for each of the collocation points. This will result in a system of linear equations in the form 𝐔 = 𝐅𝐖, where 𝐔 is a (QN' ) × 1 column vector of outputs, 𝐅 is a (QN' ) × P regression matrix formed based on evaluating the integrals, and 𝐖 is a P × 1 column vector that contains the unknown coefficients for each integral equation. Sparse regression is used to find the solution by solving the following convex optimization problem min_𝐖‖𝐔 - 𝐅𝐖‖_2 + λ‖𝐖‖_1 , where λ is a sparsity promoting regularization parameter. This optimization problem is solved using a sequential thresholded least-squares algorithm <cit.> to find 𝐖. Increasing λ will reduce the number of active terms in the final integral equation model (improved interpretability) but can reduce the accuracy. Our proposed framework resembles sparse identification of nonlinear dynamics (SINDy) where a similar optimization problem together with a library of candidate terms is used for interpretable data-driven modeling of dynamical systems <cit.>. λ=0.1 was used for all cases unless noted otherwise. In the Appendix (Sec. <ref>), we present an alternative strategy for solving this linear regression problem by presenting the normal equations for functional linear models. The library of candidate terms for each task and test case (defined in the Results Section) is listed in Table <ref>. The range and number of bandwidths β used for each case are also listed. In the more complex tasks, a large number of candidate bandwidths should be selected. Additionally, some of the candidate integral terms were defined based on a truncated domain of integration (local influence), which is a common practice in related methods <cit.>. 
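To make the regression construction concrete, the following minimal NumPy sketch assembles a small library of candidate integral features for the scalar-response model and selects among them with sequential thresholded least squares. The Gaussian kernel family, the kernel center, the 28×28 grid, and the synthetic data are illustrative assumptions and do not reproduce the exact library of Table <ref>; the hard threshold plays a role analogous to the sparsity parameter λ in Eq. <ref>.

# Illustrative sketch (not the exact implementation): scalar-response model
#   u ≈ sum_m w_m ∫ psi_m(x, y) f(x, y) dx dy
# with a library of Gaussian kernels and sequential thresholded least squares.
import numpy as np

def gaussian_kernel(X, Y, beta, x0=0.5, y0=0.5):
    # candidate kernel psi(x, y); the center (x0, y0) is an assumption
    return np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * beta ** 2))

def build_library_matrix(F_inputs, betas, grid):
    # F_inputs: (Q, I, J) sampled input functions f(x_i, y_j)
    # Riemann-sum approximation of each candidate integral feature
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    dA = (grid[1] - grid[0]) ** 2
    cols = []
    for beta in betas:
        psi = gaussian_kernel(X, Y, beta)
        cols.append((F_inputs * psi).sum(axis=(1, 2)) * dA)  # one column per kernel
    return np.stack(cols, axis=1)  # shape (Q, M)

def stlsq(F, U, lam=0.1, n_iter=10):
    # sequential thresholded least squares (SINDy-style sparse regression)
    W = np.linalg.lstsq(F, U, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(W) < lam
        W[small] = 0.0
        big = ~small
        if big.any():
            W[big] = np.linalg.lstsq(F[:, big], U, rcond=None)[0]
    return W

# toy usage with synthetic data: the target uses only the beta = 0.2 feature
grid = np.linspace(0.0, 1.0, 28)
rng = np.random.default_rng(0)
F_inputs = rng.random((200, 28, 28))                       # Q = 200 input "images"
F_mat = build_library_matrix(F_inputs, betas=[0.1, 0.2, 0.4, 0.8], grid=grid)
U = 3.0 * F_mat[:, 1]                                      # synthetic scalar outputs
W = stlsq(F_mat, U, lam=0.1)
print("selected kernel weights:", W)                       # weight concentrates on beta = 0.2

In practice the same matrix-assembly step is repeated for every lifting, kernel, and nonlinearity combination retained in the library, and the selected columns define the final analytical model.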
§.§ Interpreting and generalizing deep learning with an interpretable surrogate Our proposed framework provides an interpretable approach for learning operators and mapping between functions. The entire model is simply a linear combination of integral equations (listed in Table <ref>). The model is trained by assuming a library of candidate integral equations and solving the convex optimization problem in Eq. <ref>, which allows for the determination of coefficients associated with each integral. Subsequently, given any new input function 𝐟(x,y) one could evaluate the integral equations to find the solution 𝐮. The input function's definition is flexible and could be defined either analytically or numerically on an arbitrary grid. A schematic overview is shown in Fig. <ref>. In this manuscript, we demonstrate three application areas for our proposed interpretable model: * Interpreting a trained neural network. Given a trained neural network for mapping between function spaces, we will probe the network using a desired range of the input function to generate pairs of inputs and outputs. Subsequently, the input and output data will be used to build our interpretable surrogate model, which provides an analytical equation that approximates the behavior of the neural network. The neural network could be probed within the entire range of its training landscape or locally to better understand its behavior in a localized landscape (a specific range of training data). Finally, the network could be probed with out-of-distribution input data to understand the network's behavior outside of its training landscape. It should be noted that the network does not necessarily need to be probed with the exact data that the network used for training. * Generalizing a trained neural network. The surrogate model built based on the data from the probed neural network could also be used to improve out-of-distribution generalization. Namely, the simpler and interpretable model is expected to perform better in extrapolation and generalization. Therefore, one could envision a hybrid model where the neural network is utilized to generate the output when the input data falls within the training landscape. On the contrary, when the input data lies outside of the training landscape, the interpretable surrogate model would be invoked. Of course, this will require one to first determine the boundary of the training landscape, which might not be trivial in some problems <cit.>. * An interpretable machine learning model. The interpretable model could be trained directly based on training data to build an interpretable machine learning model in the form of a linear sum of integral equations. § RESULTS First, we will present a simple 1D example to motivate the importance of interpretable machine learning models in the context of generalization. Let's consider the 1D function u(x) = 4x sin(11x) + 3cos(2x)sin(5x). The goal is to learn this function given (x,u) training data. We use 120 training points in the range -0.2<x<0.5, which is considered to be the training region. We are interested in observing how the trained machine learning model performs within the range -1<x<1, which will require generalization to out-of-distribution inputs. A fully connected neural network with three hidden layers and 100 neurons per layer and a Gaussian process regression (GPR) model, which is more interpretable than the neural network are used for training. The results are shown in Fig. <ref>. 
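A minimal sketch of this motivating experiment is given below, using scikit-learn in place of the exact architectures described above; the optimizer settings, the GPR kernel length scale, and the random seed are assumptions made only for illustration.

# Minimal sketch of the 1D motivating example: both models are trained on
# -0.2 < x < 0.5 and evaluated on -1 < x < 1 (out-of-distribution inputs).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def u(x):
    return 4 * x * np.sin(11 * x) + 3 * np.cos(2 * x) * np.sin(5 * x)

rng = np.random.default_rng(0)
x_train = rng.uniform(-0.2, 0.5, 120).reshape(-1, 1)
y_train = u(x_train).ravel()

nn = MLPRegressor(hidden_layer_sizes=(100, 100, 100), max_iter=20000,
                  tol=1e-8, random_state=0)
nn.fit(x_train, y_train)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-6)
gpr.fit(x_train, y_train)

x_test = np.linspace(-1, 1, 400).reshape(-1, 1)
err_nn = np.abs(nn.predict(x_test) - u(x_test).ravel()).mean()
err_gpr = np.abs(gpr.predict(x_test) - u(x_test).ravel()).mean()
print(f"mean |error| on [-1, 1]: NN = {err_nn:.3f}, GPR = {err_gpr:.3f}")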
It can be seen that both models perform well within the training region. However, the black-box neural network model has worse performance outside of the training region compared to GPR. For mild extrapolation outside the training region, the GPR model has relatively good performance compared to the neural network. In the following subsections, we will present different examples to test our proposed interpretable model. In each test case, we will quantify the training error and test error. Throughout the manuscript, by test we imply out-of-distribution test. Errors are quantified for the neural network (NN) model, the interpretable model trained based on the probed trained neural network (Interp NN-driven), and the interpretable model trained based on training data (Interp data-driven). The mean and maximum errors for each case are listed in Table <ref> and <ref>, respectively. The quantified errors are based on point-wise errors quantified from the data aggregated from all pairs of data. In cases below, the input data is a 2D scalar field (image) sampled with a 28×28 resolution. In all cases with the exception of case 1 both input and output fields are normalized. In all examples (except test case 6), the same input training data used in training the neural network was employed for probing the neural network in the NN-driven interpretable model. §.§ Test case 1: predicting strain energy from a heterogeneous material The Mechanical MNIST–Distribution Shift Dataset <cit.> consists of finite element simulation data of a heterogeneous material. As shown in Fig. <ref>a, the elastic modulus distribution of the heterogeneous material is mapped from the bitmap images of the MNIST and EMNIST datasets <cit.>. The elastic modulus values E of the image bitmaps have non-zero values, and lie within a pre-defined range that depends on the distribution. Pixel bitmaps are transformed into a map of elastic moduli by transforming the pixel value b of the bitmap images through the equation E= b/255.0*(s-1)+1. In the Mechanical MNIST–Distribution Shift dataset selected <cit.>, the value s is set to 100 for training data and 25 for testing data. In the Distribution Shift EMNIST dataset, the value s is set to 100 for training data and 10 for testing data. In both cases, equibiaxial extension was applied to the heterogeneous materials through a fixed displacement d=7.0 at all boundaries. In both cases, the training data was randomly split into 80% training and 20% validation. A neural network was used to predict the change of strain energy in the material after the extension. The network consists of five fully connected layers with ReLu activation function. The training data was input as one single batch and the model was trained at a learning rate 0.001 for 50001 epochs. The absolute error distribution is shown in boxplots in Fig. <ref>. Interpretable models improve the test error and the interpretable model trained directly on data has better generalization performance. As also shown in Table <ref> and <ref>, the two different interpretable model strategies exhibit comparable performance on the training data, and their distinction becomes more apparent during testing. Another notable observation is that, in the case of EMNIST data, the data-driven interpretable model exhibits superior performance in training compared to the neural network model and exhibits lower mean and maximum training errors. However, the improvement is much smaller when considering the improvement in generalization error. 
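For reference, the bitmap-to-modulus transformation used to generate these inputs can be sketched as follows; the random bitmap below stands in for an actual MNIST/EMNIST digit.

# Sketch of the bitmap-to-modulus mapping for the distribution-shift data:
#   E = b / 255.0 * (s - 1) + 1,
# with s = 100 for training and s = 25 (MNIST) or s = 10 (EMNIST) for testing.
import numpy as np

def bitmap_to_modulus(bitmap, s):
    """Map a 28x28 pixel bitmap (values 0-255) to an elastic-modulus field."""
    return bitmap / 255.0 * (s - 1.0) + 1.0

rng = np.random.default_rng(0)
bitmap = rng.integers(0, 256, size=(28, 28))        # stand-in for a digit image
E_train = bitmap_to_modulus(bitmap, s=100.0)        # training distribution
E_test = bitmap_to_modulus(bitmap, s=25.0)          # shifted test distribution
print(E_train.min(), E_train.max(), E_test.max())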
§.§ Test case 2: predicting maximum velocity from a heterogeneous porous medium In this case, we considered porous media flow in a 2D square domain [0,1] × [0,1] governed by the steady Darcy-Brinkman equation αμ/k = -∇ p + ∇^2 , ∇· = 0 , where μ=10 and a heterogeneous permeability of k(x,y) =0.1exp(Ax) + 1 was used. Free-slip boundary condition (BC) was imposed at the top and bottom walls (Fig. <ref>a) and the flow was driven by a pressure gradient (p=1 and p=0 on the left and right sides, respectively). The porous domain was switched on using the α parameter set to α=1 when √((x-0.5)^2 + (y-Y)^2 )≤ R and α =0 otherwise as shown in Fig. <ref>a. Training data was generated by varying A, Y, and R within 0 ≤ A ≤ 2, -0.1 ≤ Y ≤ 0.15, and 0.09 ≤ R ≤ 0.16. The goal of the deep learning model was to predict maximum velocity given α k(x,y) as the input function. A total of 2250 2D simulations were performed using the open-source finite-element method solver FEniCS <cit.> using ∼70k triangular elements. The data were randomly split into 90% training and 10% validation. Out-of-distribution test data was also generated by running 100 simulations within 0 ≤ A ≤ 2, 0.2 ≤ Y ≤ 0.3, and 0.1225 ≤ R ≤ 0.2025 (note that Y is completely outside the previous range). A convolutional neural network with three layers of convolution (5×5 kernel, 6,16,32 channels, and maxpooling) was used followed by three hidden fully connected layers to map the input 2D function into a single scalar value. 2000 epochs with a learning rate of 5× 10^-4 and a batchsize of 64 were used. In this example, the L1 regularized formulation (Eq. <ref>) did not produce good test results compared to the neural network, and therefore an L2 regularization was used (presented in the Appendix, Sec. <ref>). λ = 10^-9 was the L2 regularization parameter and the preconditioned conjugate gradients method was used for solving the normal equations. The absolute error distribution is shown in boxplots in Fig. <ref>b. In this case, as expected the neural network had a better training error compared to the interpretable models. However, the interpretable models significantly reduce the test error. In this case, the NN-driven and data-driven interpretable models had similar performance in training and testing, which is likely due to the very good neural network training error. §.§ Test case 3: predicting velocity magnitude field from a heterogeneous porous medium The same boundary conditions and setup as test case 2 is considered again (without the Brinkman diffusion term). In this test case, more complex permeability patterns are considered and the goal is to predict the 2D velocity magnitude field (image to image mapping). The input permeability field is defined as k(x,y) = exp (-4Ax) |sin(2π x)cos(2π B y) | + 1, and 0 ≤ A ≤ 1, 0 ≤ B ≤ 4 were used in generating 225 simulations used for training. The data were randomly split into 80% training and 20% validation. The goal was to predict velocity magnitude field (x,y) given k(x,y) as the input function. Out-of-distribution test data were also generated by running 64 simulations within 1 ≤ A ≤ 2 and 4.2 ≤ B ≤ 6. In this case, a fully-connected deep autoencoder was used. The encoder mapped the input 28× 28 field to a latent size of 32 through 4 layers, which was subsequently mapped back to another 28× 28 field by the decoder with a similar structure as the encoder. 2000 epochs with a learning rate of 5× 10^-4 and a batchsize of 64 were used. The results are shown in Fig. <ref>. 
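The input fields for the two porous-media test cases can be reproduced on the 28×28 sampling grid with the short sketch below; the specific parameter values chosen in the example are arbitrary points inside the training ranges stated above and are used only for illustration.

# Sketch of the input fields for the porous-media test cases on a 28x28 grid.
import numpy as np

n = 28
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

def input_field_case2(A=1.0, Yc=0.0, R=0.12):
    # test case 2: the network input is alpha * k(x, y)
    k = 0.1 * np.exp(A * X) + 1.0
    alpha = ((X - 0.5) ** 2 + (Y - Yc) ** 2 <= R ** 2).astype(float)
    return alpha * k

def input_field_case3(A=0.5, B=2.0):
    # test case 3: the network input is k(x, y) itself
    return np.exp(-4.0 * A * X) * np.abs(np.sin(2 * np.pi * X) *
                                         np.cos(2 * np.pi * B * Y)) + 1.0

f2 = input_field_case2()
f3 = input_field_case3()
print(f2.shape, f3.shape)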
The contour plots and the error boxplot show that the neural network makes a better qualitative and quantitative prediction within the training regime. However, similar to the last test cases, the interpretable models have better generalization performance as shown in the boxplot (Fig. <ref>b) and Table <ref> and <ref>. §.§ Test case 4: predicting high-fidelity velocity field from low-fidelity velocity field An idealized 2D constricted vessel mimicking blood flow in a stenosed artery was considered similar to our prior work <cit.> as shown in Fig. <ref>. Steady incompressible Navier-Stokes equations were solved for a Newtonian fluid in FEniCS. A parabolic velocity profile was imposed at the inlet and no-slip BC was used at the walls. Training data were generated by performing 400 computational fluid dynamics simulations with different flow rates corresponding to different Reynolds numbers (defined based on average velocity at the inlet) between 15 and 225. In the high-resolution finite element simulations, quadratic and linear shape functions were used for velocity and pressure, respectively (P2-P1 elements) with 41.4k triangular elements. Similarly, low-resolution (low-fidelity) simulations were performed by increasing the viscosity by 20% (representing a dissipative solution with artificial diffusion) and using first order velocity elements (P1-P1 elements) with a total of 536 elements. The goal of the machine learning models is to predict the high-fidelity velocity magnitude field _hres (x,y) from the low-fidelity field _lres (x,y). We focus on a specific region of interest downstream of the stenosis as shown in Fig. <ref>b. Superresolution with machine learning is an active area of research in fluid mechanics <cit.>, and additionally, prior machine learning models have dealt with mapping between multi-fidelity data <cit.>. In our example, both datasets are first interpolated to a structured 28×28 grid. 100 out-of-distribution high-resolution and low-resolution simulations were also performed by varying the Reynolds number between 240 and 300. The neural network architecture was a deep autoencoder similar to test case 3 but with one additional encoder and decoder hidden layer. The training data were randomly split into 80% training and 20% validation. 5000 epochs with a learning rate of 2.5× 10^-5 and a batchsize of 64 were used. Finally, in this test case, instead of using a broad range for the candidate bandwidths in the interpretable model (Table <ref>), we select a focused range estimated based on existing plug-in methods for optimal bandwidth selection. Namely, β_opt = 𝒪( n^-0.3) has been proposed as an optimal bandwidth for Gaussian kernels <cit.>. Considering n=28 as the number of points in each direction, β_opt≈ 0.37. Therefore, we focused on 0.2<β<0.4 in constructing our library (Table <ref>). We verified that this range gave optimal training errors compared to other choices. It should be noted that the problem of optimal bandwidth selection is complicated <cit.>, particularly for our problem where different kinds of kernels and generalized linear models are used. The contour plots and the error boxplots are shown in Fig. <ref>. The neural network produces very accurate training results indistinguishable from the ground-truth. The interpretable model results also mimic the key quantitative and qualitative patterns with minor distinctions visible. 
However, in this test case, the interpretable models could not improve the out-of-distribution test error compared to the neural network, but similar to other examples it provided an approximation to the neural network behavior in the training regime. §.§ Test case 5: predicting high-fidelity wall shear stress field from low-fidelity velocity data away from the wall In this example, we reconsider the exact same dataset in the constricted artery model of the previous test case. The goal of the machine learning model here is to take the low-fidelity velocity magnitude field in the same region of interest (away from the wall) and predict high-fidelity wall shear stress (WSS) at the bottom wall as shown in Fig. <ref>. In this case, the machine learning model needs to map a 2D scalar field to a 1D scalar field. A deep autoencoder similar to test case 3 was used with the last encoder layer being mapped to a 100 × 1 line instead of an image. 5000 epochs with a learning rate of 2.5× 10^-5 and a 64 batchsize were used. As shown in Fig. <ref>, all methods provide a very accurate estimate for WSS in the training regime. In this case, the distinction between the training and test errors was more pronounced for both neural network and interpretable models. As seen more clearly in Table <ref> and <ref>, in testing, the mean absolute error was considerably reduced for the interpretable models. However, in a relative sense, the peak error during testing was not reduced as much as in some of the previous cases. Another interesting observation was that the data-driven interpretable model also had better training performance compared to the neural network model. §.§ Test case 6: local explanation of neural network predictions in a porous media flow example In all of the previous test cases, we used the exact same data used in training the neural network to train the proposed interpretable models. However, this is not required for the NN-driven Interp model. Namely, the trained neural network could be probed for any desired input to generate pairs of input-output data for training the NN-driven Interp model. In the case where one is interested in explaining the neural network behavior within the training regime, the NN-driven Interp model will be trained with a combination of training and in-distribution test data. In this last test case, we consider the porous media flow in test case 2. We reconsider the problem where the goal is to predict the velocity magnitude (instead of maximum velocity) from the input modified permeability field as shown in Fig. <ref>. The same dataset used in test case 2 is used for training the neural network. An autoencoder mapped the input 28× 28 field to a latent size of 8 through 4 layers, which was subsequently mapped back to another image by a similar decoder. 2000 epochs with a learning rate of 5× 10^-4 and a batchsize of 64 were used. The neural network was trained on the entire dataset explained in test case 2. However, the goal here was to interpret the neural network predictions locally. The position of the porous region was fixed at R=0.02 and Y=-0.1. The trained network was probed for 100 different A values (permeabilities) ranging between 0 ≤ A ≤ 2. This represented a local probing of the neural network with a higher sampling rate than what was used for its training. Finite element simulations were also performed for error quantification. The results are shown in Fig. <ref>. A data-driven Interp model was also trained based on the ground-truth data for comparison. 
The NN-driven Interp model produced very accurate results and could faithfully explain the neural network behavior in this localized region of the training landscape. An interesting observation is that the NN-driven Interp model slightly improves the training error compared to the neural network model and produces slightly smoother qualitative patterns. The data-driven Interp model produces significantly more accurate results compared to the neural network model. This should not be surprising because in this case the data-driven Interp model was trained based on the ground-truth data in a localized parameter space, whereas the neural network was trained over a larger parameter space. Test errors are not shown in Fig. <ref>b as in this case the Interp models were not trained based on the entire data. Instead, the errors in interpretable model predictions with respect to the neural network predictions are shown. As expected, the NN-driven Interp case matches the NN behavior more closely compared to the data-driven Interp case. The difference between the two interpretable models was less in most previous test cases where global interpretation instead of local interpretation was done. § DISCUSSION In this study, we proposed an interpretable surrogate model that approximates neural network's predictions locally or globally. The interpretable model was in the form of integral equations inspired by functional linear models. We applied our framework to different deep learning models trained on making predictions based on functions and functionals in different physics-based problems. The results demonstrated that in most test cases the interpretable model improved generalization error and even in some cases training error was improved compared to the neural network. Our proposed approach for improving generalization error could be compared to the process of human thinking. When we are asked questions that are outside our knowledge domain we probe the existing knowledge in our brain and we generate an answer to the new questions by using interpretation and reasoning. The proposed NN-driven interpretable model could be perceived within this context where we probe the neural network (our existing knowledge) to build an interpretable model to answer an unknown question (an OOD input). A surprising observation was the improved training error in the interpretable model compared to the deep learning model in some cases. In test case 1 (EMNIST), the mean and peak training errors were reduced by NN-driven and data-driven interpretable models, and in test case 5 the data-driven interpretable model reduced both mean and peak training errors. Also, in some other cases (e.g., test case 3), the maximum training error was reduced. Training error improvement by the NN-driven interpretable model observed in certain cases was a particularly unprecedented result that could be attributed to the smoothing effect in functional linear models, which has been well studied in the context of kernel smoothing <cit.>. It should be noted that in evaluating the training errors, all of the training data that were randomly split into training and validation were used. Except for test case 4, the interpretable models consistently exhibited reduced test error across all cases. This suggests that interpretable models have the potential to enhance predictive accuracy and generalize well to unseen data, showcasing their effectiveness in improving model performance. 
A notable characteristic of our proposed framework is its inherent flexibility. Our interpretable model could be built either based on the neural network predictions (NN-driven) or the training data without the need for a neural network (data-driven). The former is preferred when an interpretation of a black-box neural network model is desired, while the latter is preferred where improved accuracy (particularly improved OOD generalization) is desired. Our framework also shares many of the advantages offered by other operator learning models. For instance, similar to neural operators our framework once trained could be used to evaluate the solution at any desired input location, rather than being restricted to fixed locations as in traditional neural networks <cit.>. It has been shown in prior operator learning work with DeepONets that a small amount of data can improve their generalization error <cit.>. It has also been demonstrated that sparsity promoting neural network architectures can have good performance with small training data <cit.>. Our proposed interpretable model promotes a sparse solution to the operator learning problem, and therefore even just a small amount of OOD training data is expected to even further improve its OOD generalization, which should be investigated in future work. In related work, deep learning has been used to discover extensions of Green's functions beyond linear operators <cit.>. It is known that approximating Green's functions with neural networks is easier than approximating the action of Green's function on the input (Green's operator) <cit.>. This is consistent with our framework where we learn kernel functions in our integral equations. Another analogy could be made with Koopman operators, which provide a theoretical framework for linearizing dynamical systems <cit.> and have been approximated with black-box neural networks <cit.>. Dynamic mode decomposition (DMD) is an interpretable numerical approximation of the Koopman operator. DMD's interpretability is improved by retaining fewer modes or using sparsity promoting approaches <cit.>. This is similar to our framework where an interpretable model is selected in the form of generalized functional linear models to approximate an unknown operator. Additionally, the tradeoff between accuracy and interpretability is similar where reducing the number of modes in DMD (or the number of integral equations in our framework) increases interpretability at the cost of potentially reduced accuracy. The utilization of a library of candidate models has been leveraged in other scientific machine learning problems. Sparse identification of nonlinear dynamics (SINDy) models a nonlinear dynamical system by constructing analytical equations in the form of a nonlinear system of ordinary differential equations, where the terms in the equations are selected from a pre-specified library <cit.>. As another example, a library of hyperelastic constitutive equations has been used for discovering constitutive models in nonlinear solid mechanics problems <cit.>. Machine learning ROMs have been proposed where a library of proper orthogonal decomposition (POD) modes are used for parameter identification from low-resolution measurement data <cit.>. Another analogy can be drawn with ensemble machine learning models. Neural additive models use an ensemble of parallel neural networks and make final predictions with linear superposition <cit.>. 
Similarly, our approach could be perceived as an ensemble of approximations to the solution (each integral equation) that is linearly added to build the final solution. Our proposed framework offers the flexibility to be extended to other deep learning tasks. For instance, in certain tasks in addition to a field variable, some physical parameters might also be inputs to the neural network. As an example of an extension to such cases, the scalar response model (Eq. <ref>) could be extended as 𝐮 = r(z) ∫ψ(ξ) 𝐟(ξ) d ξ + γ z similar to the work in <cit.> where z is the additional input parameter, and r and γ are an unknown function and parameter, respectively, that need to be estimated. Leveraging analytical integral equation models in classical physics is another possible extension. An example of analytical integral equations used in fluid dynamics is the Biot-Savart Law used in modeling vortex dynamics <cit.>. This has recently inspired the neural vortex methods, which use neural networks to map vorticity to velocity <cit.>. Our analytical integral equation approach also offers the possibility of solving inverse problems using standard approaches used in solving integral equations <cit.>. Integral equations have been utilized in developing mathematical theories for inverse problems and their numerical solution <cit.>. Another interesting future direction is the comparison of our method's generalization with other operator learning methods such as DeepONets <cit.> and Fourier neural operators <cit.>. Extension to time-dependent problems is another future direction, which is inspired by parabolic Green's functions <cit.>. § CONCLUSION We have proposed an interpretable surrogate model to not only interpret a given neural network but also improve generalization and extrapolation. Our results demonstrate very good and comparable training error and in most cases improved OOD generalization error once compared to the neural network. In a broader sense, our framework suggests the notion of a hybrid machine learning strategy where a trained deep learning model is used for in-distribution predictions and an interpretable surrogate is utilized for OOD predictions. This hybrid strategy could be compared with hybrid finite-element and neural network strategies recently proposed to improve neural network predictions <cit.>. Our study suggests that by leveraging integral equations in the form of generalized functional linear models, we can build more interpretable and explainable scientific machine learning models with a high potential for improved generalization. § ACKNOWLEDGEMENT This work was supported by NSF Award No. 2247173 from NSF's Office of Advanced Cyberinfrastructure. We would like to thank Dr. Emma Lejeune and Dr. Harold Park for discussions related to this work and assistance in using the MNIST/EMNIST datasets. § DATA AVAILABILITY The codes and data used to generate the results in the manuscript will be made publicly available after peer-review. § APPENDIX §.§ Normal equations for functional linear models Here, we present an alternative strategy for finding the kernels in functional linear models using the normal equations, based on the presentation in <cit.>. Let's consider the fully functional model, which was used for image to image mapping in this study (Eq. <ref>) in the scalar form 𝐮() = ∫ψ( ξ, ) f(ξ) d ξ , where given Q pairs of training data, we have grouped them as column vectors u() = [ u_1() , …, u_Q() ]^T and f(ξ) = [ f_1(ξ) , …, f_Q(ξ) ]^T. 
We expand the unknown kernel function in Eq. <ref> using pre-defined arbitrary bases as ψ(ξ, 𝐱) = ∑_i∑_j b_ij ω_i(ξ) θ_j(𝐱) , where ω_i and θ_j are the bases and b_ij are the unknown coefficients that could be grouped into a matrix 𝐁 = [ b_ij]. Our goal is to solve the following least squares problem min_ψ∑_n=1^Q ‖ u_n(𝐱) - ∫ψ(ξ, 𝐱) f_n(ξ) d ξ‖^2 . Grouping the bases into column vectors ω(ξ) = [ω_1(ξ) , …]^T and θ(𝐱) = [θ_1(𝐱) , …]^T, we can rewrite Eq. <ref> in matrix form as 𝐮(𝐱) = 𝐙𝐁θ(𝐱) , where 𝐙 = ∫𝐟(ξ) ω^T(ξ) d ξ. Finally, by defining the matrix 𝐉 = ∫θ(𝐱) θ^T(𝐱) d𝐱, we can derive the final form of the normal equations 𝐙^T 𝐙𝐁𝐉 = 𝐙^T ∫ u(𝐱) θ^T(𝐱) d𝐱 , where we need to solve for 𝐁. We can also write a similar version of the above equation by reconsidering the optimization problem in Eq. <ref>, which was used for approximating the solution of 𝐔 = 𝐅𝐖 in Sec. <ref>. Instead of introducing an L1-regularized problem as done in Eq. <ref>, we can directly solve this regression problem using the normal equations 𝐅^T 𝐅𝐖 = 𝐅^T 𝐔 . This equation could be solved using a linear solver to find 𝐖. However, in practice the 𝐅^T 𝐅 matrix is highly ill-conditioned and close to singular, therefore an L2 regularization should be added ( 𝐅^T 𝐅 + λ𝐈 ) 𝐖 = 𝐅^T 𝐔 , where λ is the regularization parameter. An increased λ provides a more robust linear system of equations but at the cost of reduced accuracy. Our preliminary investigation has shown that this formulation in certain cases produces more accurate results related to the training error. The OOD generalization error was better in most cases for the L1-regularized problem (except for test case 2). It should also be noted that the L2-regularized problem produces a dense solution where most integral equations in the library will be nonzero, and therefore a less interpretable model is produced.
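As an illustration of this alternative, the sketch below solves the L2-regularized normal equations with a conjugate-gradient solver; the synthetic regression matrix, the Jacobi preconditioner, and the tolerance are assumptions made for the example rather than the exact setup used for test case 2.

# Illustrative solve of the L2-regularized normal equations
#   (F^T F + lambda I) W = F^T U
# with preconditioned conjugate gradients on synthetic data.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
F = rng.standard_normal((5000, 60))          # (Q * N') x P regression matrix
W_true = np.zeros(60)
W_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
U = F @ W_true + 0.01 * rng.standard_normal(5000)

lam = 1e-9
A = F.T @ F + lam * np.eye(F.shape[1])       # normal-equation operator
b = F.T @ U

d = np.diag(A)
M = LinearOperator(A.shape, matvec=lambda v: v / d)   # Jacobi preconditioner
W, info = cg(A, b, M=M, atol=1e-10)
print("cg converged:", info == 0,
      "| dominant coefficients:", np.flatnonzero(np.abs(W) > 0.1))

As noted above, this route yields a dense coefficient vector, so the thresholded L1-regularized formulation remains preferable when interpretability is the priority.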
http://arxiv.org/abs/2307.05399v1
20230711160144
Domain-Agnostic Neural Architecture for Class Incremental Continual Learning in Document Processing Platform
[ "Mateusz Wójcik", "Witold Kościukiewicz", "Mateusz Baran", "Tomasz Kajdanowicz", "Adam Gonczarek" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Production deployments in complex systems require ML architectures to be highly efficient and usable across multiple tasks. Particularly demanding are classification problems in which data arrives in a streaming fashion and each class is presented separately. Recent methods with stochastic gradient learning have been shown to struggle in such setups or have limitations such as memory buffers or restriction to specific domains, which prevents their use in real-world scenarios. For this reason, we present a fully differentiable architecture based on the Mixture of Experts model that enables the training of high-performance classifiers when examples from each class are presented separately. We conducted exhaustive experiments that proved its applicability in various domains and ability to learn online in production environments. The proposed technique achieves SOTA results without a memory buffer and clearly outperforms the reference methods. § INTRODUCTION Solutions based on deep neural networks have already found their applications in almost every domain that can be automated. An essential part of this progress is NLP, whose development has gained particular momentum with the beginning of the era of transformers <cit.>. Complex and powerful models made it possible to solve problems such as text classification with a previously unattainable accuracy. However, exploiting the capabilities of such architectures in real-world systems requires online learning after deployment. This is especially difficult in dynamically changing environments that require the models to be frequently retrained due to domain or class setup shifts. An example of such an environment is Alphamoon Workspace[https://alphamoon.ai/] where the presented architecture will be deployed as a model for document classification since we noticed the emerging need for online learning. We observed that the users' data in the document classification process changes frequently and such shifts often decrease the model accuracy. As a result, we have to retrain the models manually, which is a time-consuming process. Our goal was to design an effective approach to incremental learning that will be used in a continual learning module of our system (Figure <ref>). Recently, neural architectures have become effective and widely used in classification problems <cit.>. The parameter optimization process based on gradient descent works well when the data set is sufficiently large and fully available during the training process. Otherwise, catastrophic forgetting <cit.> may occur, which makes neural networks unable to be trained incrementally. Continual learning aims to develop methods that enable accumulating new knowledge without forgetting the previously learnt knowledge. In this paper, we present a domain-agnostic architecture for online class incremental continual learning called DE&E (Deep Encoders and Ensembles). Inspired by the E&E method <cit.>, we propose a method that increases its accuracy, provides full differentiability, and, most importantly, can effectively solve real-world classification problems in production environments.
Our contribution is as follows: 1) we introduced a differentiable KNN layer <cit.> into the model architecture, 2) we proposed a novel approach to aggregate classifier predictions in the ensemble, 3) we performed exhaustive experiments showing the ability to learn incrementally and real-world usability, 4) we demonstrate the effectiveness of the proposed architecture by achieving SOTA results on various data sets without a memory buffer. § RELATED WORK §.§ Continual Learning §.§.§ Methods Currently, methods with a memory buffer such as GEM <cit.>, A-GEM <cit.> or DER <cit.> usually achieve the highest performance in all continual learning scenarios <cit.>. Such methods store part of the data in the memory and this data is successively replayed during training on new, unseen examples. However, the requirement to store data in memory disqualifies these methods in many practical applications due to privacy policies or data size <cit.>. This forces attention toward other approaches, such as parameter regularization. The most popular methods in this group include EWC <cit.> and LWF <cit.>. When receiving a new dose of knowledge, these methods attempt to influence the model parameter updating procedure to be minimally invasive. As research shows <cit.>, regularization-based methods fail in class incremental scenarios making them ineffective in many real-world cases. §.§.§ Approaches for NLP Almost all prior works focus on the development of continual learning methods in the computer vision domain <cit.>. Research on continual learning for NLP is limited and, as <cit.> observed, the majority of current NLP methods are task-specific. Moreover, these methods often use a memory buffer <cit.> or relate to the language model itself <cit.>. To address this niche, domain-agnostic approaches have to become much more prevalent in the near future. §.§ Ensemble methods Ensemble methods are widespread in the world of machine learning <cit.>. By using predictions of multiple weak learners, it is possible to get a model that performs surprisingly well overall. Broad adoption of methods <cit.> demonstrates the effectiveness of ensemble techniques in a wide variety of tasks. Ensembles have also been used successfully in the field of continual learning, as evidenced by the BatchEnsemble <cit.> or CN-DPM <cit.>. Other contributions present in literature <cit.> tend to focus strongly on improving model performance rather than increasing model efficiency. Furthermore, ensemble approaches can also be used indirectly through dropout <cit.> or weights aggregation <cit.>. §.§ Mixture of Experts Mixture of Experts (ME) <cit.> is a technique based on the divide and conquer paradigm. It assumes dividing the problem space between several specialized models (experts). Experts are supervised by the gating network that selects them based on the defined strategy. The difference between the ensembles is that ME methods focus on selecting a few experts rather than combining predictions of all available models. ME techniques have found many applications in various domains <cit.>, including continual learning <cit.>, and even nowadays such approaches are widely used in NLP <cit.>. §.§ Real-world NLP systems Over the last few years, the amount of real-world NLP applications has grown rapidly <cit.>. Despite major successes in the real-world application of language technologies such as Google Translate, Amazon Alexa, and ChatGPT, production deployment and maintenance of such models still remain a challenge. 
Researchers have shown <cit.>, that there are several issues related to maintaining NLP models, including technical limitations, latency, and performance evaluation. However, the crucial problem is the shift of data domain that forces models to be retrained and deployed again over time <cit.>. It is a major limitation in dynamically changing environments where users expect models to quickly adapt to them. Currently, this problem has been tackled in several systems <cit.>, but many of the solutions preclude maintaining model accuracy when training incrementally making them insufficient. § OUR APPROACH §.§ Problem formulation Class incremental continual learning involves training a classification model f(·):𝕏⟼𝕐 on a sequence of T tasks. The model is trained on each task separately (one task at a time). Each task D_t contains data points D_t={(x^1_t, y^1_t), …, (x^N_t_t, y^N_t_t)}, where N_t is length of D_t, x^(i)_t∈ℝ^D, and y^(i)_t∈𝕐_t. 𝕐_t is a label set for task t and 𝕐_t∩𝕐_t' = ∅ for t ≠ t'. We want the model to keep performing well on all previous tasks after each update, and we assume to be working in the most challenging setup <cit.>, where one task consists of data from one class. §.§ Method We present a flexible and effective domain-agnostic architecture that can be used to solve various classification problems. The architecture is presented in Figure <ref>. Feature extractor. The first component of the proposed architecture is a multi-layer feature extractor that transforms input data into the embedding space. It can be described by the following mapping 𝐳=F(𝐱), where 𝐱∈ℝ^D is an input example and 𝐳∈ℝ^M is a M-dimensional embedding. The approach we follow assumes the use of a pre-trained model with frozen parameters. Such a procedure makes it possible to completely prevent the extractor from forgetting knowledge by isolating feature space learning from the classification process. Keys and classifiers. We use an ensemble of N classifiers f_n(·), where each of them maps the embedding into a K-dimensional output vector ŷ_n=f_n(𝐳). With each classifier, there is an associated key vector 𝐤_n∈ℝ^M with the same dimensionality as the embedding. The keys help to select the most suitable models for specialization with respect to the currently processed input example. They are initialized randomly from normal distribution. We use simple single-layer neural networks as classifiers, with fan-in variance scaling as the weight initialization strategy. The network output is activated by a hyperbolic tangent function (tanh). Soft κ-nearest neighbors layer. The standard KNN algorithm is often implemented using ordinary sorting operations that make it impossible to determine the partial derivatives with respect to the input. It removes the ability to use KNN as part of end-to-end neural models. However, it is possible to obtain a differentiable approximation of the KNN model by solving the Optimal Transport Problem <cit.>. Based on this concept, we add a differentiable layer to the model architecture. We call this layer soft κ-nearest neighbors (soft KNN). In order to determine the KNN approximation, we first compute a cosine distance vector 𝐜∈ℝ^N between the embedding and the keys: c_n = 1-cos(𝐳,𝐤_n), where 𝐜𝐨𝐬(·,·) denotes the cosine similarity. Next, we follow the idea of a soft top-κ operator presented in <cit.>, where κ denotes the number of nearest neighbors. Let 𝐄∈ℝ^N× 2 be the Euclidean distance matrix with the following elements: e_n,0=(c_n)^2, e_n,1=(c_n-1)^2. 
And let 𝐆∈ℝ^N× 2 denote the similarity matrix obtained by applying the Gaussian kernel to 𝐄: 𝐆= exp(-𝐄/σ), where σ denotes the kernel width. The exp operator is applied elementwise to the matrix 𝐄. We then use the Bregman method, an algorithm designed to solve convex constrained optimization problems, and compute L iterations of Bregman projections in order to approximate their stationary points: 𝐩^(l+1)=μ/𝐆𝐪^(l), 𝐪^(l+1)=ν/𝐆^⊤𝐩^(l+1), where l=0,…,L-1, μ=1_N/N, ν=[κ/N,(N-κ)/N]^⊤, 𝐪^(0)=1_2/2, and 1_i denotes the i-element all-ones vector. Finally, let Γ denote the optimal transport plan matrix, given by: Γ = diag(𝐩^(L))·𝐆·diag(𝐪^(L)) As the final result γ∈ℝ^N of the soft κ-nearest neighbor operator, we take the second column of Γ multiplied by N, i.e. γ=N Γ_:,2. γ is a soft approximation of a zero-one vector that indicates which κ out of N instances are the nearest neighbors. Introducing the soft KNN makes it possible to train parts of the model that were frozen until now. Voting layer. We use both c_n and γ to weight the predictions, giving a higher impact to classifiers whose keys are similar to the extracted features. The obtained approximation γ has two main functionalities: it eliminates the predictions from classifiers outside the κ nearest neighbors, and it weights the result. Since the Bregman method does not always fully converge, the vector γ contains continuous values that are close to 1 for the most relevant classifiers. We make use of this property during the ensemble voting procedure: the higher the γ value for a single classifier, the higher its contribution toward the final ensemble decision. The final prediction is obtained as follows: ŷ=∑_n=1^Nγ_nc_nŷ_n/∑_n=1^Nc_n Training. To effectively optimize the model parameters, we follow the training procedure presented in <cit.>. It assumes the use of a specific loss function, the negative inner product between the ensemble prediction and the one-hot coded label: ℒ(𝐲, 𝐲̂)=-𝐲^⊤𝐲̂ Optimizing this criterion complements the use of a tanh activation function and significantly reduces catastrophic forgetting <cit.>. Following the reference method, we also use an optimizer that discards the value of the gradient and uses only its sign to determine the update direction. As a result, the parameters are changed by a fixed step during training. § EXPERIMENTS §.§ Setup In order to ensure the reproducibility of our experiments, we evaluated our method on popular and publicly available data sets. Data sets. We use three common text classification data sets with different characteristics - Newsgroups <cit.>, BBC News <cit.>, and Consumer Finance Complaints[Source: <https://huggingface.co./datasets/consumer-finance-complaints>]. The goal of the experiments was to evaluate our method on tasks with different difficulty levels. We also conducted experiments on audio classification using the Speech Commands <cit.> data set. For evaluation purposes, we selected the 10 most representative classes from Newsgroups, Complaints and Speech Commands. Finally, we also conducted experiments on the popular MNIST and CIFAR-10 data sets as image domain representatives. The data set summary is presented in Table <ref>. In all experiments we used the train set to train the model incrementally, and afterward we performed a standard evaluation using the test set. Feature extractors. For all text data sets, we used Distilbert <cit.>, a light but still very effective alternative to large language models.
Next, for Speech Commands, we utilized Pyannote <cit.>, a pretrained model for producing meaningful audio features. For image data sets, we used different extractors. MNIST features were produced by the pretrained VAE and CIFAR-10 has a dedicated BYOL model (see <ref> for more details). §.§ Results The results of the evaluation are presented in Table <ref>. For all setups evaluated, our model performed best improving results of the main reference method (E&E) by up to 3 percent points (pp.). The improvement scale varies across the data sets. We also observed a significant difference in achieved accuracy between the DE&E and the standard continual learning methods. Simple regularization-based methods completely fail in the class incremental scenario. It shows how demanding training the model incrementally is when a set of classes is not fixed, which often takes place in real-world scenarios. Furthermore, our method achieved these results without replaying training examples seen in the past, making it more practical relative to the SOTA memory-based methods (GEM, A-GEM, Replay) that store samples from every class. For the ensemble of 128 classifiers and Speech Commands data set, our architecture achieved an accuracy of more than 59 pp. higher than the best method with a memory buffer. One of the most important hyperparameters of the model is the number of classifiers (experts). To investigate how it affects accuracy, we evaluated our architecture in three variants: small - 64, normal - 128, and large - 1024 classifiers. The evaluation results are presented in Figure <ref>. We observed that increasing the ensemble size translates to higher accuracy, and gain depends on the setup and data characteristics. The most significant improvement was observed on BBC and CIFAR-10 where the large model achieved an accuracy of about 20pp. better than the small one. For the remaining data sets and the analogous setup, the gain was up to 5pp. We explain this phenomenon as the effect of insufficient specialization level achieved by smaller ensembles. If experts are forced to solve tasks that are too complicated they make mistakes often. Increasing the number of experts allows for dividing feature space into simpler sub-tasks. However, such a procedure has natural limitations related to the feature extractor. If features have low quality, increasing the number of experts will be ineffective. To select the optimal ensemble size we suggest using the elbow rule which prevents the model from being overparameterized and ensures reasonable accuracy. However, in general, we recommend choosing larger ensembles that are better suited for handling real-world cases. Since real-world environments require deployed models to quickly adapt to domain shifts, we tested our method in a domain incremental scenario. In such setup, each data batch can provide examples from multiple classes that can be either known or new <cit.>. This way, the model needs to learn incrementally, being prone to frequent domain shifts. As shown in Table <ref>, the proposed method handles both scenarios with comparable accuracy. We observed improved accuracy for BBC News, but reduced for the remaining data sets. Such property can be beneficial when there is limited prior knowledge about the data or the stream is imbalanced <cit.>. We have also investigated the importance of the presented expert selection method. 
We trained the DE&E method and for each training example, we allowed it to choose random experts (rather than the most relevant ones) with fixed probability p. As shown in Figure <ref>, the selection method has a strong influence on the model performance. Accuracy decreases proportionally to the p over all data sets studied. The proper expert selection technique is crucial for the presented method. It is worth noting that relatively easier data sets suffer less from loss of accuracy than hard ones because even randomly selected experts can still classify the data by learning simple general patterns. In more difficult cases like Newsgroups and Complaints data sets, model performance is comparable to random guessing when p > 0.5. § CONCLUSIONS In this paper, we proposed a domain-agnostic architecture for continual learning with a training procedure specialized in challenging class incremental problems. The presented architecture is based on the Mixture of Experts technique and handles many practical issues related to the deployment of text classification models in non-trivial real-world systems. As our main contribution, we introduced a fully differentiable soft KNN layer and a novel prediction weighting strategy. By conducting exhaustive experiments, we showed improvement in accuracy for all the cases studied and achieved SOTA results without using a memory buffer. This enables an effective and secure training, especially when working with sensitive textual data. The presented architecture is highly flexible, can effectively solve classification problems in many domains, and can be applied to real-world machine learning systems requiring continuous improvement. Such work enables researchers to make further steps toward overrunning many of the current challenges related to language technology applications. § LIMITATIONS The main limitations of the proposed architecture are related to the presence of the frozen feature extractor. The accuracy of the classification module is proportional to the quality of features. Since the ensemble weak learners are single-layer neural networks, the entire feature extraction process relies on a pre-trained model that strongly limits the upper bound of classification accuracy. Such approach reduces the method complexity, but also makes it prone to errors when embeddings have low quality. Achieving accuracy at a satisfactory level, which is crucial in real world systems, requires the use of high quality feature extractors. Currently, plenty of pretrained SOTA models are available for free in domains such as text or image classification, but if such extractor is not available, does not produce reasonable features or is too expensive to use, our architecture may not be the best choice. Another issue is relatively long training time comparing to the reference methods (see <ref>). The introduction of a differentiable soft KNN layer resulted in additional computational effort that clearly impacted the model complexity. This limits the use in low latency systems with machine learning models trained online. § ETHICS STATEMENT The authors foresee no ethical concerns with the work presented in this paper, in particular concerning any kind of harm and discrimination. Since the presented architecture can have a wide range of usages, the authors are not responsible for any unethical applications of this work. 
§ ACKNOWLEDGEMENTS The research was conducted under the Implementation Doctorate programme of Polish Ministry of Science and Higher Education and also partially funded by Department of Artificial Intelligence, Wroclaw Tech and by the European Union under the Horizon Europe grant OMINO (grant number 101086321). It was also partially co-funded by the European Regional Development Fund within the Priority Axis 1 “Enterprises and innovation”, Measure 1.2. “Innovative enterprises, sub-measure 1.2.1. “Innovative enterprises – horizontal competition” as part of ROP WD 2014-2020, support contract no. RPDS.01.02.01-02-0063/20-00. acl_natbib § APPENDIX §.§ Code Code is currently available as a Github repository <https://github.com/mateusz-wojcik-97/domain-agnostic-architecture>. §.§ Computing resources The machine we used had 128 GB RAM, an Intel Core i9-11900 CPU, and an NVIDIA GeForce RTX 3060 GPU with 12GB VRAM. Every experiment was performed using the GPU. §.§ Time complexity The comparison in training time between E&E and DE&E models is shown in Table <ref>. For all evaluated data sets, the training time of our model was higher than the time to train the reference method. The results vary between data sets. The introduction of a differentiable soft KNN layer resulted in additional computational effort that clearly impacted the time complexity of the model. §.§ Implementation details We use PyTorch to both reproduce the E&E results and implement the DE&E method. For text classification we used pretrained Distilbert [https://huggingface.co./distilbert-base-uncased] model and for audio classification we used pretrained Pyannote [https://huggingface.co./pyannote/embedding] model, both from the Huggingface repository. We used a pre-trained ResNet-50 model as the feature extractor for the CIFAR-10 data set. The model is available in the following GitHub repository, <https://github.com/yaox12/BYOL-PyTorch>, and is used under MIT Licence. For MNIST, we trained a variational autoencoder on the Omniglot data set and utilized encoder part as our feature extractor. We based our implementation of the soft KNN layer on the code provided with <https://proceedings.neurips.cc/paper/2020/hash/ec24a54d62ce57ba93a531b460fa8d18-Abstract.html>. All data sets used are public. Baselines We use Naive, LwF <cit.>, EWC <cit.>, SI <cit.>, CWR* <cit.>, GEM <cit.>, A-GEM <cit.> and Replay <cit.> approaches as baselines to compare with our method. We utilize the implementation from Avalanche (<https://avalanche.continualai.org/>), a library designed for continual learning tasks. The main purpose of this comparison was to determine how the proposed method performs against classical approaches and, in particular, against the methods with memory buffer, which gives a significant advantage in class incremental problems. The recommended hyperparameters for each baseline method vary across usages in literature, so we chose them based on our own internal experiments. For a clarity, we keep hyperparameter naming nomenclature from the Avalnache library. For EWC we use lambda = 10000. The LwF model was trained with alpha = 0.15 and temperature = 1.5. For SI strategy, we use lambda = 5e7 and eps = 1e-7. The hyperparameters of the memory based approach GEM were set as follows: memory_strength = 0.5, patterns_per_exp = 5, which implies that with every task, 5 examples will be accumulated. This has a particular importance when the number of classes is large. 
With this setup and 10 classes in data set, memory contains 50 examples after training on all tasks. Having a large memory buffer makes achieving high accuracy much easier. For the A-GEM method, use the same number of examples in memory and sample_size = 20. All models were trained using Adam optimizer with a learning_rate of 0.0005 and batch_size of 60. We chose cross entropy as a loss function and performed one training epoch for each experience. To fairly compare baseline methods with ensembles, as a backbone we use neural network with a similar number of parameters (as in ensemble). Network architectures for each experimental setup are shown in Table <ref>. All baseline models were trained by providing embeddings produced by feature extractor as an input. Ensembles. We used E&E <cit.> as the main reference method. It uses an architecture similar to that of a classifier ensemble, however the nearest neighbor selection mechanism itself is not a differentiable component and the weighting strategy is different. In order to reliably compare the performance, the experimental results of the reference method were fully reproduced. Both the reference method and the proposed method used exactly the same feature extractors. Thus, we ensured that the final performance is not affected by the varying quality of the extractor, but only depends on the solutions used in the model architecture and learning method. Both E&E and our DE&E were trained with the same set of hyperparameters (excluding hyperparameters in the soft KNN layer for the DE&E). We use ensembles of sizes 64, 128 and 1024. Based on the data set, we used different hyperparameter sets for the ensembles (Table <ref>). The keys for classifiers in ensembles were randomly chosen from the standard normal distribution and normalized using the L2 norm. The same normalization was applied to encoded inputs during lookup for matching keys. Soft KNN. We use the Sinkhorn algorithm to perform the forward inference in soft KNN. The Sinkhorn algorithm is useful in entropy-regularized optimal transport problems thanks to its computational effort reduction. The soft KNN has 𝒪(n) complexity, making it scalable and allows us to safely apply it to more computationally expensive problems. The values of soft KNN hyperparameters were σ = 0.0005 and L = 400. We utilize the continuous character of an output vector to weight the ensemble predictions. It is worth noting that we additionally set the threshold of the minimum allowed soft KNN score to 0.3. It means every element in γ lower than 0.3 is reduced to 0. We reject such elements because they are mostly the result of non-converged optimization and do not carry important information. In this way, we additionally secure the optimization result to be as representative as possible.
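For illustration, the following is a minimal NumPy sketch of the soft κ-nearest-neighbour operator and the weighted vote described above. It is not the implementation released with the paper: the kernel width is set larger than the reported σ = 0.0005 so that the plain (non-log-domain) Sinkhorn/Bregman iteration stays numerically stable, the cost-column and marginal ordering is arranged so that γ is close to 1 for the κ most similar keys (as the text describes the operator's intent), and all helper names are introduced here only for illustration.

import numpy as np

def soft_knn_gamma(z, keys, kappa, sigma=0.05, iters=400, thresh=0.3):
    # Soft approximation of a 0/1 indicator of the kappa nearest keys.
    # z: (M,) query embedding; keys: (N, M) classifier keys.
    N = keys.shape[0]
    # Cosine distances c_n = 1 - cos(z, k_n)
    z_n = z / (np.linalg.norm(z) + 1e-12)
    k_n = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-12)
    c = 1.0 - k_n @ z_n
    # Two-column cost: the "selected" column is cheap for small distances.
    E = np.stack([(c - 1.0) ** 2, c ** 2], axis=1)
    G = np.exp(-E / sigma)                               # Gaussian kernel
    mu = np.full(N, 1.0 / N)                             # row marginals
    nu = np.array([(N - kappa) / N, kappa / N])          # column marginals
    q = np.full(2, 0.5)
    for _ in range(iters):                               # Bregman projections
        p = mu / (G @ q + 1e-30)
        q = nu / (G.T @ p + 1e-30)
    Gamma = (p[:, None] * G) * q[None, :]                # transport plan
    gamma = N * Gamma[:, 1]                              # soft membership among the kappa nearest
    gamma[gamma < thresh] = 0.0                          # reject non-converged mass (threshold 0.3)
    return gamma, c

def ensemble_predict(z, keys, classifier_outputs, kappa):
    # Weighted vote: y_hat = sum_n gamma_n * c_n * y_n / sum_n c_n
    gamma, c = soft_knn_gamma(z, keys, kappa)
    num = (gamma * c)[:, None] * classifier_outputs
    return num.sum(axis=0) / (c.sum() + 1e-12)

# Tiny smoke test with random keys and outputs
rng = np.random.default_rng(0)
keys = rng.normal(size=(64, 16))
z = keys[3] + 0.01 * rng.normal(size=16)                 # query close to key 3
outs = rng.normal(size=(64, 10))
gamma, _ = soft_knn_gamma(z, keys, kappa=8)
print("largest gamma at key:", int(np.argmax(gamma)))    # expected: 3

In a deployed variant, the same iteration would typically be written with differentiable tensor operations (e.g. PyTorch) so that gradients can flow through the transport plan to the keys and classifiers.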
http://arxiv.org/abs/2307.04702v1
20230710165949
Vocal Tract Area Estimation by Gradient Descent
[ "David Südholt", "Mateo Cámara", "Zhiyuan Xu", "Joshua D. Reiss" ]
cs.SD
[ "cs.SD", "eess.AS" ]
Articulatory features can provide interpretable and flexible controls for the synthesis of human vocalizations by allowing the user to directly modify parameters like vocal strain or lip position. To make this manipulation through resynthesis possible, we need to estimate the features that result in a desired vocalization directly from audio recordings. In this work, we propose a white-box optimization technique for estimating glottal source parameters and vocal tract shapes from audio recordings of human vowels. The approach is based on inverse filtering and optimizing the frequency response of a waveguide model of the vocal tract with gradient descent, propagating error gradients through the mapping of articulatory features to the vocal tract area function. We apply this method to the task of matching the sound of the Pink Trombone, an interactive articulatory synthesizer, to a given vocalization. We find that our method accurately recovers control functions for audio generated by the Pink Trombone itself. We then compare our technique against evolutionary optimization algorithms and a neural network trained to predict control parameters from audio. A subjective evaluation finds that our approach outperforms these black-box optimization baselines on the task of reproducing human vocalizations. § INTRODUCTION Articulatory synthesis is a type of speech synthesis in which the position and movement of the human articulators, such as the jaw, lips or tongue, are used as control parameters. Because of their inherent interpretability, articulatory features lend themselves well towards fine-grained and flexible user control over the speech synthesizer <cit.>. Articulatory synthesis is typically implemented as a physical model, which simulates the propagation of air pressure waves through the human vocal tract. A large number of such models have been developed over the years <cit.>. Obtaining the articulatory features that control the physical model is not a trivial problem. Area functions of the vocal tract can be directly measured with magnetic resonance imaging (MRI) <cit.> or electromagnetic articulography (EMA) <cit.>. However, these procedures are time-consuming, susceptible to noise and variations, and require access to specialized equipment. It is therefore desirable to recover the articulatory features directly from a given speech signal. In general, this task is known as Acoustic-to-Articulatory Inversion (AAI). Two main strands of research can be identified: one is data-driven AAI, which seeks to develop statistical methods based on parallel corpora of speech recordings and corresponding MRI or EMA measurements <cit.>. The other takes an analysis-by-synthesis approach to AAI, in which numerical methods are developed to both obtain acoustic features from articulatory configurations, and to invert that mapping to perform AAI <cit.>. In this work, we focus on the analysis-by-synthesis approach and consider the specific articulatory features that make up the control parameters of an articulatory synthesizer. The AAI task is then framed as obtaining control parameters such that the synthesizer reproduces a target recording.
This allows a user to reproduce that vocalization with the articulatory synthesizer, and then modify parameters such as vocal tract size, pitch, vocal strain, or vowel placement. Attempts to solve this problem of sound matching, for articulatory synthesis or other types of synthesis, can generally be classified into black-box and white-box methods. Black-box methods do not rely on information about the structure of the synthesizer. A popular approach is to use derivative-free optimization techniques such as genetic algorithms <cit.> or particle swarm optimization <cit.>. These methods are computationally expensive and can take many iterations to converge to a solution. Various deep neural network (DNN) architectures have also been proposed to predict control parameters that match a given sound <cit.>. They require constructing high-quality datasets for training that cover the space of acoustic outputs. White-box methods can improve the sound matching of specific synthesizers by incorporating knowledge of their internal structure. This can be done by reasoning about their underlying physical processes <cit.> or, more recently, making use of auto-differentiation and gradient descent techniques <cit.>. In this work, we propose a gradient-based white-box optimization technique for sound matching vowel sounds with the articulatory synthesizer known as the Pink Trombone (PT)[<https://dood.al/pinktrombone>]. The PT is a web application that uses well-known models of the glottal source and the vocal tract to implement an intuitively controllable vocal synthesizer. Its user interface is depicted in Figure <ref>. Our technique works as follows. First, we decompose a recording into a glottal source signal and an IIR filter with existing inverse filtering methods. We then obtain a vocal tract configuration by minimizing the difference between an analytical formulation of the tract's transfer function <cit.> and the IIR filter with gradient descent. A differentiable implementation of the mapping between control parameters and the vocal tract configuration allows propagation of the error gradient directly to the control parameters. Section <ref> describes the details of our approach. We find that this approach can accurately recover the vocal tract area function on vowel sounds generated by the PT itself. A subjective listening test shows that without requiring any training procedures, the approach outperforms black-box baselines on the task of reproducing real human vocalization. The results of the objective and subjective evaluations are presented in section <ref>. Section <ref> concludes the paper. § METHOD The PT is based on the widely used source-filter model of speech production. The speech output S(z) = G(z)V(z)L(z) is assumed to be the combination of three linear time-invariant (LTI) systems: the glottal flow G, the vocal tract V, and the lip radiation L. The lip radiation is approximated as a first-order differentiator L(z) = 1 - z^-1 and combined with G to form a model of the glottal flow derivative (GFD). Speech is then synthesized by generating a GFD signal (the source) and filtering it through the vocal tract V. In our sound matching approach, a target sound is first decomposed into the GFD source waveform and coefficients for an all-pole filter, using the inverse filtering technique proposed in <cit.>. The control parameters for the PT glottal source are then obtained directly from the GFD waveform. 
We propose an objective function based on the magnitude response of the all-pole filter that allows estimating the control parameters of the vocal tract with gradient descent. The overall method is illustrated in Figure <ref>. The source code is available online[<https://github.com/dsuedholt/vocal-tract-grad>]. §.§ Inverse Filtering To separate target audio into a GFD waveform and a vocal tract filter, we use the Iterative Adaptive Inverse Filtering method based on a Glottal Flow Model (GFM-IAIF) <cit.>. IAIF methods in general obtain gross estimates of G, V and L with low-order LPC estimation, and then iteratively refine the estimates by inverse filtering the original audio with the current filter estimates, and then repeating the LPC estimation at higher orders. GFM-IAIF makes stronger assumptions about the contribution of the glottis G, and uses the same GFD model as the PT synthesizer (compare section <ref>), making it a good choice for our sound matching task. From GFM-IAIF, we obtain an estimate for the vocal tract filter V in the form of N+1 coefficients a_0,… a_N for an all-pole IIR filter: V(z) = 1/∑_i=0^Na_iz^-i This also gives us an estimate of the GFD waveform by inverse filtering the original audio through V, i.e. applying an all-zero FIR filter with feed-forward coefficients b_i=a_i. §.§ Glottal Source Controls The PT uses the popular Liljencrants-Fant (LF) model to generate the GFD waveform. Originally proposed with four parameters <cit.>, the LF model is usually restated in terms of just a single parameter R_d, which is known to correlate well with the perception of vocal effort <cit.>. R_d can be obtained from the spectrum of the GFD. Specifically, <cit.> finds the following linear relationship between R_d and H_1-H_2, the difference between the magnitudes of the first two harmonic peaks of the GFD spectrum (measured in dB): H_1-H_2 = -7.6 + 11.1R_d We estimate the fundamental frequency F_0 using the YIN algorithm <cit.>, and measure the magnitudes of the GFD spectrum at the peaks closest to F_0 and 2· F_0 to calculate H_1-H_2 and thus R_d. However, the PT does not use R_d as a control parameter directly. Instead, it exposes a “Tenseness” parameter T, which relates to R_d as T = 1 - R_d/3. T is clamped to values between 0 and 1, with higher values corresponding to higher perceived vocal effort. Additionally, the PT adds white noise with an amplitude proportional to 1 - √(T) to the GFD waveform, to give the voice a breathy quality at lower vocal efforts. Figure <ref> shows the glottal source at varying Tenseness values. The estimated control parameters F_0 and Tenseness correspond to the horizontal and vertical axes in the PT's “voicebox” UI element, respectively (see Figure <ref>). §.§ Vocal Tract While the glottal source affects voice quality aspects like breathiness and perceived effort, the vocal tract is responsible for shaping the source into vowels and consonants. In the PT, the vocal tract is treated as a sequence of M+1 cylindrical segments, with M=43. The shape of the vocal tract is then fully described by its area function, i.e. the individual segment cross-sectional areas A_0,…, A_M. Noting that A = π(d/2)^2, the area function may equivalently be described by the segment diameters d_0,…,d_M. An additional, similar model of the nasal tract is coupled to the vocal tract at the soft palate. However, for the open vowel sounds that we are considering, the soft palate is closed and the coupling effect is negligible. 
In the PT implementation, the soft palate only opens when parts of the vocal tract are fully constricted, therefore here we focus only on the vocal tract itself. §.§.§ Control Model Directly specifying each segment diameter individually does not make for an intuitive user experience and could easily result in very unrealistic, strongly discontinuous area functions. Instead, the PT implements a tiered control model over the vocal tract based on the model proposed in <cit.>. The control model consists of two tiers. The first tier is a tongue defined by a user-specified diameter t_d and position t_p. The tongue shape is modeled as sinusoid shape and modifies a base diameter, representing a neutral area function, into the rest diameter. Figure <ref> illustrates this. The second control tier are constrictions that the user can apply to the rest diameter at any position along the vocal tract. Similarly to the tongue, constrictions are defined by an index, a diameter, and a model of how they affect the rest diameter. There are however two differences between the tongue and the constrictions: Firstly, constrictions are optional, while the tongue is always present. Secondly, constrictions can fully close the vocal tract, at which point noise is inserted to model plosives and fricatives. For this work, we consider only open area functions, meaning that we do not allow constrictions to reduce the diameter below a certain threshold. §.§.§ Estimating the Area Function Propagation of the glottal source through the vocal tract is modeled by implementing each cylindrical segment as a bidirectional, half-sample delay. The half-sample delay is achieved by processing the signal at twice the audio sampling rate and adding up adjacent pairs of samples. At the M inner junctions, the change in cross-sectional area leads to reflection and refraction, described by scattering coefficients calculated from the segment areas as k_m = A_m-A_m-1/A_m+A_m-1 for m=1,… M. This is the well-known Kelly-Lochbaum (KL) model <cit.>. An illustration of a scattering junction is shown in Figure <ref>. The length of the simulated vocal tract results from the number of segments and the sampling rate. Considering a speed of sound in warm air of c ≈ 350 m/s and an audio sampling rate of f_s = 48000 Hz, implementing half-sample delays as unit delays processed at 2· f_s, M + 1 = 44 segments result in a vocal tract length of 44 · 350 / (2·48000) ≈ 0.16 m. This corresponds to the vocal tract of an average adult male <cit.>, giving the PT a male voice. The number of segments and the unit delays are fixed in the PT. The KL model can be implemented more flexibly through e.g. the use of fractional delays <cit.>. An analytical transfer function for the piecewise cylindrical model using unit delays was derived in <cit.>. The formulation can be straightforwardly adapted to half-sample delays by replacing every delay term z^-n with z^-n/2, and then applying an additional factor of 1 + z^-1 to account for the summing of adjacent samples. The transfer function H_KL can then be stated as: H_KL(z) = (1 + z^-1)z^-(M+1)/2∏^M_m=1(1 + k_m)/K_1, 1 + K_1, 2R_L - R_0(K_2, 1 + K_2, 2R_L)z^-1 R_0 and R_L are the amount of reflection at the glottis and lips, respectively, and K∈ℝ^2×2 is defined as follows: K = [ K_1, 1 K_1, 2; K_2, 1 K_2, 2 ] = ∏_m=1^M[ 1 k_mz^-1; k_m z^-1 ] We now wish to find the tongue controls and constrictions such that |H_KL| approximates |V|, the magnitude response of the vocal tract recovered by inverse filtering. 
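As a concrete illustration of how the magnitude response can be evaluated from an area function, the following NumPy sketch computes |H_KL| on a grid of angular frequencies. It is not the released code of this work; the glottis and lip reflection values R_0 and R_L are placeholder assumptions, since the text does not state the values used, and the function and variable names are introduced only for illustration.

import numpy as np

def kl_magnitude_response(areas, n_freqs=512, R0=0.75, RL=-0.85):
    # |H_KL(e^{i w})| of a piecewise-cylindrical tract with half-sample delays.
    # areas: (M+1,) cross-sectional areas A_0..A_M; R0, RL: placeholder reflection values.
    A = np.asarray(areas, dtype=float)
    k = (A[1:] - A[:-1]) / (A[1:] + A[:-1])        # scattering coefficients k_1..k_M
    M = len(k)
    w = np.linspace(0.0, np.pi, n_freqs, endpoint=False)
    z = np.exp(1j * w)
    zm1 = z ** -1.0                                # z^{-1}
    zmh = z ** -0.5                                # half-sample delay z^{-1/2}
    H = np.empty(n_freqs, dtype=complex)
    for f in range(n_freqs):
        # K = prod_m [[1, k_m z^-1], [k_m, z^-1]] (ordered product over junctions)
        K = np.eye(2, dtype=complex)
        for km in k:
            K = K @ np.array([[1.0, km * zm1[f]],
                              [km,  zm1[f]]])
        num = (1 + zm1[f]) * zmh[f] ** (M + 1) * np.prod(1 + k)
        den = (K[0, 0] + K[0, 1] * RL) - R0 * (K[1, 0] + K[1, 1] * RL) * zm1[f]
        H[f] = num / den
    return w, np.abs(H)

# Example: a gently varying 44-segment area function (43 junctions)
diam = 1.5 + 0.5 * np.sin(np.linspace(0, np.pi, 44))
areas = np.pi * (diam / 2) ** 2
w, mag = kl_magnitude_response(areas)
print("frequencies (rad/sample) of the three largest-magnitude samples:",
      np.sort(w[np.argsort(mag)[-3:]]))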
In an approach inspired by <cit.>, we now consider the squared error between the log of the magnitude responses for a given angular frequency 0 ≤ω < π: E(ω) = (log_10|H_KL(e^iω)| - log_10|V(e^iω)|)^2 We can then define a loss function that measures how closely a given vocal tract area function matches the recovered vocal tract filter by evaluating the mean squared error over a set of F linearly spaced frequencies: ℒ = 1/F∑_f=0^F-1E(f/Fπ) We can then find the set of controls that minimizes ℒ, meaning that the corresponding area function approximates |V|. A schematic overview of the computation graph is shown in Figure <ref>. § EXPERIMENTS AND RESULTS We first evaluated the performance of our approach on recovering control parameters for sounds generated by the PT itself. These in-domain sounds are guaranteed to be within the possible output space of the PT, and the ground truth parameters are known. We then applied our approach to estimating control parameters for out-of-domain sounds that were not generated by the PT itself. Ground truth parameters that provide an exact match are not known and likely do not exist due to limitations of the model, which makes evaluation challenging. We performed a listening test to compare the quality of our method to previously proposed, model-agnostic black-box sound matching approaches. For all evaluations, parameter ranges were normalized to [0, 1]. Gradient descent was performed for 100 steps, with a step size of 10^-4 and a momentum of 0.9. §.§ Reconstructing PT-generated Audio §.§.§ Setup For the in-domain evaluation, we generated 3000 total sets of control parameters and attempted to recover the vocal tract area. For all examples, F_0 was uniformly sampled from [80, 200], the tenseness from [0, 1], the tongue position t_p from [12, 29] (measured in segments along the tract), and the tongue diameter t_d from [2.05, 3.5]. The range of F_0 roughly covers the pitch range of adult male speech, while the other control parameter ranges cover the range of possible values defined by the PT interface. The parameters were divided in three sets of 1000 examples each. The first set was taken as-is. A random constriction, with position sampled from [0, 43] and diameter sampled from [0.3, 2], was applied to the vocal tract in the second set. Two such independently sampled constrictions were applied in the third set. For each example, we performed the gradient descent optimization twice with different targets: First, with the target response |V| taken directly from the ground truth frequency response (FR) of the original vocal tract. Since this FR is guaranteed to be within the domain of the KL vocal tract model, it should be able to be matched very closely. Second, with the target response |V| recovered by the GFM-IAIF method. This is no longer guaranteed to have an exactly matching vocal tract configuration, so higher deviation is expected. However, since GFM-IAIF and the PT are based on similar assumptions about the source-filter model, the obtained target responses match the ground truth closely enough to be useful in recovering the original control parameters. §.§.§ Results Table <ref> shows the mean absolute error (MAE) for the tongue parameters t_p and t_d for each condition. Additionally, the MAE values for the total area function (i.e. the diameter of each individual segment) and the recovered FR are given. 
In the simple case of optimizing the true FR with no constrictions applied, the original vocal tract area could be recovered with very high accuracy, often to an exact match. Constrictions introduce more degrees of freedom and result in a less accurately recovered area function, although the FR was still matched very closely. Figure <ref> illustrates how visibly different area functions can have very similar frequency responses. This relates to the transfer function in equation (<ref>) not depending on the area directly, but rather on the resulting reflection coefficients in equation (<ref>). The locations of the area function's extrema, i.e. the segments at which the area changes from growing wider to growing more narrow or vice versa, therefore affect the transfer function more strongly than the specific value of a given area segment. Since the FR obtained by GFM-IAIF might not be able to be matched exactly by the KL model, some constrictions might be used during the estimation even if there were none applied to the original vocal tract, leading to deviations from the true area function. An example of this is shown in Figure <ref>. The range of frequencies most affected by this depend on the choice of LPC estimation in GFM-IAIF; as noted in <cit.>, modeling the glottal contribution as a 3^rd order filter is well-motivated by the LF model and gives balanced results in practice. Due to the presence of this error introduced through inverse filtering, applying constrictions to the ground truth area function had a considerably less pronounced effect on the error metrics when the FR obtained by GFM-IAIF is used as the optimization target. Inverse filtering also noticeably affected the estimation of the glottal source parameters. The MAE for the prediction of the tenseness T∈[0, 1] was 0.013 when the original GFD waveform was used, but rose to 0.057 when the GFD waveform was recovered by inverse filtering. Even the accuracy of the YIN fundamental frequency estimator dropped slightly: the MAE for F_0∈[80, 200] was 0.04 on the original GFD waveform, and 0.44 on the recovered GFD waveform. Applying constrictions had no effect on the glottal source parameter estimation. Grouping the MAE values by the number of constrictions result in values deviating less than 0.5% from the reported global MAE values for both T and F_0. §.§ Sound Matching Human Vocalizations §.§.§ Black-Box Baselines To assess the out-of-domain performance, we performed a subjective evaluation comparing our gradient-based approach against three black-box optimization methods that have previously been used for the task of sound matching. Genetic algorithms <cit.> employ a population of candidate solutions, which evolve through generations by applying genetic operators such as selection, crossover, and mutation. The fittest individuals, evaluated through a fitness function, are more likely to reproduce and pass on their traits to offspring. Particle Swarm Optimization (PSO) <cit.> involves a group of candidate solutions, called particles, that move through the search space to find the global optimum. Each particle's position is updated based on its own best-known position, the best-known position within its neighborhood, and a random component, with the goal of balancing exploration and exploitation. For both the genetic algorithm and PSO, scores for a given set of parameters were calculated as the mean squared error between the mel-spectrogram of the target audio, and the audio generated by the PT with the current parameters. 
Neural parameter prediction <cit.> uses a neural network to predict parameters from audio. We train a convolutional neural network (CNN) architecture with two convolutional layers separated by a max-pooling layer and followed by three fully connected layers on a dataset of 1,000,000 randomly sampled parameter sets and their corresponding mel-spectrograms. While the in-domain evaluation focused on static vocal tract configurations, the speech samples used in the out-of-domain evaluation are time-varying. For all baselines and the gradient-based approach, this is handled by estimating the parameters on a frame-by-frame basis. To avoid sudden jumps in the area, the predictions of the baselines were smoothed over time by applying a Savitzky-Golay filter <cit.>. For our gradient approach, the estimation of each frame was initialized with the previous frame's prediction. §.§.§ Listening Test We reproduced 6 short recordings of human vocalizations with each method. The originals and the reproductions, and the individual ratings are available online.[<https://dsuedholt.github.io/vocal-tract-grad/>] The pitch, breathiness, and vowel shape of the recordings is time-varying. Each recording came from a different male speaker, since the PT's fixed vocal tract length limits its output to voices that are read as male (see section <ref>). We set up an online multiple-stimulus test on the Go Listen platform <cit.> asking participants to compare the four reproductions to the original recording and rate the reproduction on a scale of 0–100. We included an additional screening question in which we replaced one of the reproductions with the original recording to ensure participants had understood the instructions and were in a suitable listening environment. 22 participants took part in the listening test. Of those, 4 gave the original recording in the screening question a rating lower than 80, so their results were discarded. The results of the listening test are shown in Figure <ref>. Friedman's rank sum test indicates that the ratings differ significantly (p < 0.001), and post-hoc analysis using Wilcoxon's signed-rank test confirms that the reproductions obtained by our proposed approach are rated significantly (p < 0.001) higher than the three baselines, indicating that our method is well-suited for the sound matching task. § CONCLUSION We presented a white-box optimization technique for sound matching vowel sounds with the articulatory synthesizer. We obtained a vocal tract frequency response through inverse filtering and estimated corresponding articulatory control parameters with gradient descent optimization, propagating error gradients through the mapping of control parameters to the vocal tract area function. We showed that our approach can accurately match frequency responses for audio generated by the synthesizer itself. Reproductions of time-varying human vocalizations generated with our approach outperformed black-box baselines in a subjective evaluation. By showing that articulatory features can be estimated with a gradient-based method, our work lays the foundation for further research into end-to-end sound matching of articulatory synthesizers using neural networks, which require the propagation of gradients. Additionally, our method can be expanded to explore the sound matching of more complex synthesizers, including those with two- and three-dimensional vocal tract models and varying vocal tract lengths that are not limited to adult male voices. 
§ ACKNOWLEDGMENTS This work was supported by UK Research and Innovation [grant number EP/S022694/1]. The authors would like to thank Benjamin Hayes, Yisu Zong, Christian Steinmetz and Marco Comunità for valuable feedback.
http://arxiv.org/abs/2307.04196v1
20230709150526
Trans-Planckian Effect in $f(R)$ Cosmology
[ "S. Cheraghchi", "F. Shojai", "M. H. Abbassi" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2307.04397v1
20230710075924
On Estimating Derivatives of Input Signals in Biochemistry
[ "Mathieu Hemery", "François Fages" ]
q-bio.QM
[ "q-bio.QM", "q-bio.MN" ]
Inria Saclay, Lifeware project-team, Palaiseau, France [email protected] [email protected] On Estimating Derivatives of Input Signals in Biochemistry Mathieu Hemery and François Fages July 8, 2023 ========================================================== The online estimation of the derivative of an input signal is widespread in control theory and engineering. In the realm of chemical reaction networks (CRN), this raises however a number of specific issues on the different ways to achieve it. A CRN pattern for implementing a derivative block has already been proposed for the PID control of biochemical processes, and proved correct using Tikhonov's limit theorem. In this paper, we give a detailed mathematical analysis of that CRN, thus clarifying the computed quantity and quantifying the error done as a function of the reaction kinetic parameters. In a synthetic biology perspective, we show how this can be used to design error correcting terms to compute online functions involving derivatives with CRNs. In the systems biology perspective, we give the list of models in BioModels containing (in the sense of subgraph epimorphisms) the core derivative CRN, most of which being models of oscillators and control systems in the cell, and discuss in detail two such examples: one model of the circadian clock and one model of a bistable switch. § INTRODUCTION Sensing the presence of molecular compounds in a cell compartment is a necessary task of living cells to maintain themselves in their environment, and achieve high-level functions as the result of low-level processes of basic biomolecular interactions. The formalism of chemical reaction networks (CRN) <cit.> is both a useful abstraction to describe such complex systems in the perspective of systems biology <cit.>, and a possible molecular programming language in the perspective of synthetic biology <cit.>. Sensing the concentration levels of molecular compounds has been well-studied in the domain of signal transduction networks. For instance, the ubiquitous CRN structure of MAPK signaling networks has been shown to provide a way to implement analog-digital converters in our cells, by transforming a continuous input signal, such as the concentration of an external hormone activating membrane receptors, into an almost all-or-nothing output signal according to some threshold value of the input, i.e. using a stiff sigmoid as dose-response input-output function <cit.>. The analysis of input/output functions fits well with the computational theory of CRNs. In particular, the Turing-completeness result shown in <cit.> for the interpretation by Ordinary Differential Equations (ODE) of CRNs, possibly restricted to elementary CRNs using mass-action law kinetics and at most bimolecular reactions, demonstrates the generality of this approach to biomolecular programming. Furthermore, it comes with an algorithm to automatically generate a finite CRN for implementing any computable real function. Such a compiler is implemented in our CRN modeling software BIOCHAM <cit.> in several forms, including a theoretically more limited but practically more interesting framework for robust online computation <cit.>. Sensing the derivative of an input molecular concentration is nevertheless beyond the scope of this computational paradigm since it assumes that the input molecular concentrations are stabilized at some fixed values which makes no sense for computing the derivative. 
Furthermore, it is well-known that the derivative of a computable real function is not necessarily computable <cit.>. We must thus content ourselves with estimating the derivative of an input with some error, instead of computing it with arbitrary precision as computability theory requires. In control theory and engineering, online estimations of input signal derivatives are used in many places. Proportional Integral Derivative (PID) controllers adjust a target variable to some desired value by monitoring three components: the error, that is the difference between the current value and the target, its integral over a past time slice, and its current derivative. The derivative term can improve the performance of the controller by avoiding overshoots and solving some problematic cases of instability. Following early work on the General Purpose Analog Computer (GPAC) <cit.>, the integral terms can be implemented with CRNs using simple catalytic synthesis reactions such as A → A+B for integrating A over time, indeed B(T)=∫_O^T A(t) dt. Difference terms can be implemented using the annihilation reaction A_+ + A_-→∅ which is also used in <cit.> to encode negative values by the difference of two molecular concentrations, i.e. dual-rail encoding. This is at the basis of the CRN implementations of, for instance, antithetic PI controllers presented in <cit.>. For the CRN implementation of PID controllers, to the best of our knowledge three different CRN templates have been proposed to estimate derivative terms. The first one by Chevalier & al. <cit.> is inspired by bacteria's chemotaxis, but relies on strong restrictions upon the parameters and the structure of the input function making it apparently limited in scope. A second one proposed by Alexis & al. <cit.> uses tools from signal theory to design a derivative circuit with offset coding of negative values and to provide analytic expressions for its response. The third one developed by Whitby & al. <cit.> is practically similar in its functioning to the one we study here, differing only on minor implementation details, and proven correct through Tikhonov's limit theorem. This result ensures that when the appropriate kinetic rates tend to infinity, the output is precisely the derivative of the input. In this paper, we give a detailed mathematical analysis of that third derivative CRN and quantify the error done as a function of the reaction kinetic parameters, by providing a first-order correction term. We illustrate the precision of this analysis on several examples, and show how this estimation of the derivative can be actively used with error-correcting terms to compute elementary mathematical functions online. Furthermore, we compare our core derivative CRN to the CRN models in the curated part of <BioModels.net> model repository. For this, we use the theory of subgraph epimorphisms (SEPI) <cit.> and its implementation in BIOCHAM <cit.>, to identify the models in BioModels which contain the derivative CRN structure. We discuss with some details the SEPIs found on two such models: , one of the smallest eukaryotes circadian clock model <cit.>, and , a model of the bistable switch at the restriction point of the cell cycle <cit.>. The rest of the article is organized as follow. In Section <ref>, we provide some preliminaries on CRNs and their interpretation by ODEs. We present the core differentiation CRN in Section <ref>, in terms of both of some of its different possible biological interpretations, and of its mathematical properties. 
Section <ref> develops the mathematical analysis to bound the error done by that core CRN, and give in Section <ref> some examples to test the validity of our estimation and the possibility to introduce error-correcting terms. Section <ref> is then devoted to the search of that derivative CRN pattern in BioModels repository and the analysis of those matching in two cases. Finally, we conclude on the perspectives of our approach to both CRN design at an abstract mathematical level, and comparison to natural CRNs to help understanding their functions. § PRELIMINARIES ON CRNS §.§ Reactions and Equations The CRN formalism allows us to represent the molecular interactions that occur on a finite set of molecular compounds or species, {X_i}_i ∈ 1 … n, through a finite set of formal (bio)chemical reactions, without prejudging their interpretation in the differential, stochastic, Petri Net and Boolean semantics hierarchy <cit.>. Each reaction is a triplet (R,P,f), also written R P, where R and P are multisets of respectively reactant and product species in {X_i}, and f:_+^n ↦_+ is a kinetic rate function of the reactant species. A CRN is thus entirely described by the two sets of n species and m reactions: {X_i},{R_s P_s}. The differential semantics of a CRN associates positive real valued molecular concentrations, also noted X_i by abuse of notation, and the following ODEs which define the time evolution of those concentrations: d X_i/dt = ∑_s ∈ S (P_s(X_i) - R_s(X_i)) f_s(X), where P_s(X_i) (resp. R_s(X_i)) denotes the multiplicity (stoichiometry) of X_i in the multiset of products (resp. reactants) of reaction s. In the case of a mass action law kinetics, the rate function is a monomial, f_s = k_s ∏_x ∈ R_s x, composed of the product of the concentrations of the reactants by some positive constant k_s. If all reactions have mass action law kinetics, we write the rate constant in place of the rate function R P, and the differential semantics of the CRN is defined by a Polynomial Ordinary Differential Equation (PODE). From the point of view of the computational theory of CRNs, there is no loss of generality to restrict ourselves to elementary CRNs composed of at most bimolecular reactions with mass action law kinetics. Indeed, <cit.> shows that any computable real functions (in the sense of computable analysis, i.e. with arbitrary finite precision by a Turing machine), can be computed by such a CRN, using the dual-rail encoding of real values by the difference of molecular concentrations, x=X_+-X_-. While our compiler ensures that the quantity X_+-X_- behaves properly, it is also important to degrade both of them with an annihilation reaction, X_+ + X_- ∅, to avoid a spurious increase of their concentration. Those annihilation reactions are supposed to be faster than the other reactions of the CRN. The first example given in <cit.> showed the compilation of the cosine function of time, y=cos(t) in the following CRN: A_p → A_p+y_p A_m → A_m+y_m A_m(0)=0, A_p(0)=0 y_m → A_p+y_m y_p → A_m+y_p y_m(0)=0, y_p(0)=1 y_m+y_p ∅ A_m+A_p ∅ The last two reactions are necessary to avoid an exponential increase of the species concentration. 
The associated PODE is: d(A_m)/dt = y_p-fast*A_m*A_p A_m(0) =0 d(A_p)/dt = y_m-fast*A_m*A_p A_p(0) =0 d(y_m)/dt = A_m-fast*y_m*y_p y_m(0) =0 d(y_p)/dt = A_p-fast*y_m*y_p y_p(0) =1 §.§ CRN Computational Frameworks The notions of CRN computation proposed in <cit.> and <cit.> for computing input/ouput functions, do not provide however a suitable framework for computing derivative functions. Both rely on a computation at the limit, meaning that the output converges to the result of the computation whenever the CRN is either properly initialized <cit.>, or the inputs are stable for a sufficient period of time <cit.>. To compute a derivative, we cannot ask that the input stay fixed for any period of time as this would imply a null derivative. We want the output to follow « at run time » the derivative of the input. Our question is thus as follows. Given an input species X following a time course imposed by the environment X(t), is it possible to perform an online computation such that we can approximate the derivative dX/dt on the concentration of 2 output species using a dual-rail encoding? The idea is to approximate the left derivative by getting back to its very mathematical definition: dX/dt(t) = lim_ϵ→ 0^+X(t)-X(t-ϵ)/ϵ, but how can we measure X(t-ϵ)? § DIFFERENTIATION CRN §.§ Biological intuition using a membrane One biological intuition we may have to measure a value in a previous time is to use a membrane with a fast diffusive constant. Indeed, if we suppose that the input is the outside species, the inside species equilibrates to follow the concentration of the outside one (the input) but also suffers a lag due to the diffusion. Building upon this simple trick leads to the CRN presented in Fig. <ref>. As the derivative may be positive or negative, a dual-rail encoding is used for the derivative. This CRN is mainly equivalent to the derivative block proposed in <cit.> apart from the fact that we suppose (for the sake of clarity) that the input stay positive and no dual-rail encoding is used for it. In the case of a dual-rail encoded input, the two species need to have the same permeability through the membrane, otherwise the delay is not the same for the positive and negative parts. The delay is thus introduced through a membrane under the assumption that the outside concentration is imposed by the environment. This conveniently explains why the kinetic rates are the same for the two monomials in the derivative of , but this is not mandatory. Indeed two other settings can be used to construct such a CRN without relying on a membrane. We could use a phosphorylation and a dephosphorylation reactions where would be the phosphorylated species. Or we could, as in <cit.>, rely on a catalytic production of by and a degradation reaction of . A drawback of these two other implementations is that they need to be tuned to minimize the difference between the rates of the two monomials in the derivative of . Otherwise a proportional constant is introduced between and , and needs to be corrected by adjusting the production rates of D_+ and D_-. However, the membrane implementation also has its own drawback as it requires the reaction → + D_+ to occur through the membrane. We may think of a membrane protein M that mediates this reaction (+ M → + M + D_+). Then, since its concentration is constant, it can simply be wrap up in the kinetic constant of the reaction. Which of this three implementations should be chosen may depend on the exact details of the system to be build. 
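Whichever implementation is chosen, the behaviour sketched above can be checked numerically before the reaction list and rate equations are made explicit in the next subsection. The following minimal SciPy sketch (written for illustration, not produced by BIOCHAM) integrates the mass-action rate equations of the membrane circuit of Fig. <ref> for a sine-wave input and compares the dual-rail readout D_+ − D_− with the true derivative; the rate constants are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

k_diff, k, fast = 10.0, 10.0, 1000.0       # illustrative rate constants

def x_out(t):                              # input imposed by the environment
    return 2.0 + np.sin(t)

def rhs(t, y):
    x_in, d_plus, d_minus = y
    dx_in = k_diff * (x_out(t) - x_in)                         # diffusion through the membrane
    dd_p = k * k_diff * x_out(t) - k * d_plus - fast * d_plus * d_minus
    dd_m = k * k_diff * x_in    - k * d_minus - fast * d_plus * d_minus
    return [dx_in, dd_p, dd_m]

sol = solve_ivp(rhs, (0.0, 10.0), [x_out(0.0), 0.0, 0.0],
                t_eval=np.linspace(0, 10, 2001),
                method="LSODA", rtol=1e-6, atol=1e-9)

d_est = sol.y[1] - sol.y[2]                # dual-rail estimate D = D+ - D-
d_true = np.cos(sol.t)                     # exact derivative of the input 2 + sin(t)
err = np.max(np.abs(d_est[500:] - d_true[500:]))   # ignore the initial transient
print(f"max |D - dX_out/dt| after the transient: {err:.3f}")

With these rates the residual error is of the order of the delays 1/k_diff and 1/k, consistent with the first-order analysis developed below.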
§.§ Core differentiation CRN Our core differentiation CRN, schematized in Fig. <ref>, is more precisely composed of the following 7 reactions: + D_+ + D_- D_+ ∅ D_- ∅ D_+ + D_- ∅ The diffusion through the membrane is symmetrical with a constant k_diff, and both activations should have the same rate constant, the product k k_diff, while the degradation of the outputs should have rate k. We make the assumption that the outside species is present in large quantity, so that its concentration is not affected by the dynamics of the CRN. Under this assumption, the differential semantics is then the same as that of the differentiation CRN proposed in <cit.>: d/dt = k_diff ( - ) dD_+/dt = k k_diff - k D_+ - fast D_+ D_- dD_-/dt = k k_diff - k D_- - fast D_+ D_- The derivative is encoded as D = D_+ - D_- and hence obeys the following equation (using the last two lines of the previous system): dD/dt = dD_+/dt - dD_-/dt = k k_diff ( - ) - k (D_+ - D_-) dD/dt = k ( - /1/ - D ) In the next section, we prove that the internal species is equal to the input with a delay ϵ, hence giving us our second time point X(t-ϵ), up to first order in ϵ = 1/k_diff. The fraction in the last equation is thus precisely an estimate of the derivative of the input as defined in Eq. <ref>, with a finite value of ϵ. It is also worth remarking that such derivative circuits can in principle be connected to compute higher-order derivatives, with a dual-rail encoded input. It is well known that such estimations of higher-order derivatives can be very sensitive to noise and error, and are thus not reliable for precise computation, but they may be good enough for biological purposes. We will see a biological example of this kind in Section <ref> on a simple model of the circadian clock. § MATHEMATICAL ANALYSIS OF THE QUALITY OF THE ESTIMATION Our first goal is to determine precisely the relation between the internal species and the input when the latter is enforced by the environment. Using the first line of Eq. <ref>, we obtain by symbolic integration: (t) = ∫_0^∞exp(- s) (t-s) ds, where we can see that the internal species is the convolution of the input with a decreasing exponential. This convolution is reminiscent of the notion of evaluation in the theory of distributions and has important regularisation properties for the input function. In particular, whatever the input function is, this ensures that the internal representation is continuous and differentiable. The interesting limit for us is when k_diff →∞, that is when ϵ = 1/k_diff → 0. In this case, the exponential is negligible except in a neighbourhood of the current time and, supposing that the input is infinitely differentiable[We also explore in Figures <ref>D and <ref>C what a non-analyticity of the input implies for our model.], we obtain by Taylor expansion: (t) = ∫_0^∞exp(- s) ∑_n=0^∞(-s)^n/n!^(n)(t) ds = ∑_n=0^∞/n!^(n)(t) ∫_0^∞ (-s)^n exp(- s) ds The integral may be evaluated separately using integration by parts and recursion: I_n = ∫_0^∞ (-s)^n exp(- s) ds = -n ϵ I_n-1 = (-1)^n (ϵ)^n+1 n! We thus have: (t) = ∑_n=0^∞/n!^(n)(t) (-1)^n n! ϵ^n+1 = ∑_n (-ϵ)^n ^(n)(t) = (t) - ϵ'(t) + ϵ^2 ”(t) + … (t) = ( t - ϵ) + o(ϵ^2). Using the Taylor expansion once more to obtain the last equation formalizes our intuition: the concentration of the internal species follows the time course of the external one with a delay equal to the inverse of the diffusion constant k_diff. This validates our formulation of the derivative. Now, it is sufficient to remark that Eq. <ref> has exactly the same form as the first line of Eq. <ref>, which we have just studied at length.
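As a quick numerical illustration of the delay relation just established, before carrying it over to the output D, the following sketch (Python/SciPy; x_out and x_in are our own names for the input and internal species, and the sine-wave input and rate value are assumptions) integrates the diffusion equation and compares the internal species with the input delayed by 1/k_diff:

# Numerical check of x_in(t) ~ x_out(t - 1/k_diff), up to a term of order epsilon^2.
import numpy as np
from scipy.integrate import solve_ivp

k_diff = 10.0                        # epsilon = 1/k_diff = 0.1
x_out = lambda t: 1.0 + np.sin(t)    # input imposed by the environment

sol = solve_ivp(lambda t, v: [k_diff * (x_out(t) - v[0])],
                (0.0, 20.0), [x_out(0.0)],
                t_eval=np.linspace(0.0, 20.0, 2001), rtol=1e-9, atol=1e-11)

t, x_in = sol.t, sol.y[0]
err = np.abs(x_in - x_out(t - 1.0 / k_diff))
print(np.max(err[t > 2.0]))   # ~1e-2, i.e. of order epsilon^2, as predicted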
Just replace the input by the estimate of the left derivative, the internal species by the output D, and use the rate constant k instead of k_diff. The delay approximation is thus also possible in this step and, introducing the delay τ = 1/k, we immediately obtain a precise expression for D: D(t) = (t-τ) - (t-ϵ-τ)/ϵ+o(ϵ)+o(τ^2). We can see this as the secant approximation of the derivative of the input, with a step size ϵ and a delay τ. Moreover, we also know that the residual errors in this expression are of first order in ϵ and of second order in τ. It is well known in the field of numerical computation that the secant method provides a rather poor approximation, but it has the benefit of being the simplest one, and thus gives here a small-size derivative circuit. In the hope of improving the precision, one could implement higher-order methods using several "membranes" to access the value of the function at several time points before performing the adapted computation. Such additional complexity would, however, also increase the delay between the input and the output function. § VALIDATION ON SIMPLE EXAMPLES §.§ Verification of the delay-approximation In this first subsection, we want to validate the approximation expressed by Eq. <ref>. For this, we focus on the diffusion part of our CRN, the exchange between the input and the internal species. We perform numerical simulations for two different values of ϵ and two different input functions: a sine wave and an absolute-value signal. The second allows us to see how well the delay approximation works in the presence of non-analyticity. Fig. <ref> shows the response of the internal species in these different conditions. In panel A, the kinetic constant is very low, so we expect our approximation to fail. Indeed, one can see that, in addition to having an important delay, the output is strongly smoothed; this tends to average out the variations of the input, bringing the internal species back to the average value of the input. In panel B, the diffusion constant is increased by a factor of 10. The delay approximation is now very good and we only expect an error of order ϵ^2 = 10^-2, which can be checked with good accuracy in panel C. Panel D shows the case of a non-differentiable function, in which an error of order ϵ = 0.1 is visible shortly after the discontinuity and vanishes on a similar timescale. §.§ Approximation of the derivative Let us now check the behaviour of the derivative circuit. In Fig. <ref>, we can see the response of our derivative circuit for sine-wave and absolute-value input functions. In panels A and B, we see that when the first- and second-order derivatives of the input are smaller than the kinetic reaction rates, the delay approximation gives a very good picture of the response. From a complementary point of view, panel C shows that, in front of a singularity, the system adapts after an exponential transient phase with a characteristic time τ = 1/k. §.§ Using signal derivatives for online computations Our main motivation for analyzing the differentiation CRN is to compute, online, a function f of some unknown input signal, that is, given a function f, to compute f applied to the input signal at all times. Yet the differentiation CRN only allows us to approximate the derivative of the input signal. The idea is thus to implement the PODE: dY/dt = f'((t)) d /dt, Y(0)=f(X(0)) and to provide the result online on a set of internal species, Y(t) = Y_+ - Y_-. This requires computing the function f' and estimating the derivative of the input. Using the formalism developed in <cit.>, we know that there exists an elementary CRN (i.e.
quadratic PODE) computing f' for any elementary function f, and we have just shown that the derivative of the input can be approximated by the differentiation CRN. Therefore, in principle, any elementary function of input signals can be approximated online by a CRN. As a toy example, let us consider the square function, d Y/dt = 2 (D_+ - D_-), and as input a sine wave offset to stay positive: (t) = 1 + sin(t). The CRN generated by BIOCHAM according to these principles to compute the square of the input online is: , , + D_+ D_+ ∅ + D_- D_- ∅ + D_+ + D_+ + Y_+ + D_- + D_- + Y_- D_+ + D_- ∅ Y_+ + Y_- ∅ The first three lines implement the derivative circuit, the fourth line implements the derivative of Y, and the last line provides the dual-rail encoding. The numerical simulation of this CRN is depicted in Fig. <ref>A. One can see that, while it effectively computes the square of the input, it also suffers from a strong drift. To verify whether this drift comes from the delay between the input and the output, we can compute analytically the output of our network with our delayed approximation of the derivative (see the full computation in the Appendix): y(t) = ∫ 2 x(s) x'(s-τ) ds ≃(1+sin(t))^2 + τ t. This is precisely the behaviour that can be seen in the time course of Fig. <ref>A. After integrating over 20 time units, the offset is of order 2, which is exactly what is predicted for a delay τ = 1/k = 0.1. Therefore, while it is always possible to get rid of such errors by increasing the kinetic rate constants, identifying the cause of the drift gives us a potentially simpler path to eliminate it: use a representation of the input that is itself delayed, through an exchange reaction with a species X_delay, and use this delayed signal as the catalyst for the production of Y_+ and Y_- in place of the input. This leads to the CRN given in the Appendix (Eq. <ref>), for which the numerical integration shown in Fig. <ref>B confirms that we have indeed gotten rid of the drift. Said otherwise, the correct implementation for online computation is given by: dY/dt = f'((t-τ)) d/dt(t-τ), where the delays have to be equal for the two pieces of the derivative. § BIOLOGICAL EXAMPLES §.§ BioModels repository To explore the possibility that natural biochemical systems already implement one form or another of the core differentiation CRN, one can try to scan the CRN models of the BioModels repository <cit.>. This can be automated with the general graph matching notion of Subgraph EPImorphism (SEPI) introduced in <cit.> to compare CRN models and identify model reduction relationships based on their graph structures. SEPI generalizes the classical notion of subgraph isomorphism by introducing an operation of node merging in addition to node deletion. Considering two bipartite graphs of species and reactions, there exists a SEPI from G_A to G_B if there exists a sequence of mergings[A species (resp. reaction) node can only be merged with another species (resp. reaction) node, and the resulting node inherits all the incoming and outgoing edges of the two nodes.] and deletions of nodes in G_A such that the resulting graph is isomorphic to G_B. More precisely, we used the SEPI detection algorithm of BIOCHAM to scan the curated models in BioModels (after automatic rewriting with well-formed reactions <cit.>) and check the existence of a SEPI from each model graph to the differentiation CRN graph. Fig. <ref> shows that our small differentiation CRN with 4 species is frequently found in large models. It is thus reasonable to restrict ourselves to models with no more than 10 species.
Table <ref> lists the models with no more than 10 species, among the first 700 models of BioModels, that contain our differentiation CRN. The predominance of models exhibiting oscillatory dynamics, and in particular of circadian clock models, is striking. §.§ Circadian clock The model of the eukaryotic circadian clock proposed by Becker-Weimann et al. <cit.> is among the smallest circadian clock models displaying a SEPI reduction toward our differentiation CRN. Its influence graph is depicted in Fig. <ref>A; we also display in red the first SEPI found by BIOCHAM, and in green a second one obtained by enforcing the mapping of the PER/CRY species inside the nucleus to the input of the differentiation CRN. Interestingly, in the second SEPI, the nuclear membrane of this model separates the species mapped to the input from the one mapped to the internal species. The oscillatory behavior of this model is shown in panel B. Now, thinking about the mathematical insight that this relation provides, it is quite natural for a CRN implementing an oscillator to evaluate its own derivative on the fly. Actually, looking at the natural symmetry of the model, we are inclined to think that this CRN may actually consist of two interlocked copies of the derivative circuit, each computing the derivative of the output of the other, as if a second-order derivative circuit were closed on itself. This is something we could easily check by imposing restrictions on the SEPI mapping. Enforcing the nuclear PER/CRY protein to be mapped to the input gives us the SEPI shown in green in Fig. <ref>A. To validate the preservation of the function of the derivative CRN given by this SEPI, we can verify that the quantities defined by summing the species that are mapped together are effectively linked by the desired derivative relation. As can be seen in Fig. <ref>B, the agreement is striking. One can even note that the delay of the chemical derivative is the one predicted by our theory. The case of Fig. <ref>C is more complex, as this part of the model seems to compute the opposite of the derivative. It is, however, worth noting that there is absolutely no degree of freedom in our choice of the species used in Fig. <ref>B and C, which are entirely constrained by the SEPI given by BIOCHAM. Taking both SEPIs together, we see that Bmal1^nucleus_protein and Bmal1^cytoplasm_mRNA play symmetrical roles, being the input and the derivative of the two displayed SEPIs. Given that the second SEPI introduces a negative sign, we may see this as: Bmal1^cytoplasm_mRNA = d/dtBmal1^nucleus_protein Bmal1^nucleus_protein = -d/dtBmal1^cytoplasm_mRNA The solutions of this well-known system of equations are the sine and cosine functions, and this perfectly fits the oscillatory behaviour of this CRN. To confirm this hypothesis, we check for the presence of a SEPI from the clock model to the compiled cosine CRN presented in Eq. <ref>, which is indeed the case. On the other hand, there is no SEPI relation between the compiled cosine CRN and the derivative circuit. §.§ Bistable switch The model of a bistable switch in the context of the restriction point <cit.> displays a SEPI toward our derivative circuit. This model, presented in Fig. <ref>A, studies the Rb-E2F pathway as an example of a bistable switch, where the presence of a (not modeled) growth factor activates the MyC protein, starting the pathway until it reaches the E2F factor that constitutes the output of the model. Yao et al. show that once E2F reaches a threshold, its activation becomes self-sustained, hence the notion of a switch.
The SEPI given by BIOCHAM is worthy of interest, as it does not merge any species and merges only three reactions into one, leaving all the others either untouched or deleted, thus indicating that the pattern of the derivative is already well present. Moreover, MyC is mapped to the input and E2F to one part of the output, reinforcing our intuition that the discovered SEPI is close to the natural functioning of the CRN. To confirm this, we run the simulation as provided by the model and display the derivative of the MyC protein against a scaled difference of the D_+ and D_- species: D = a RB - b E2F, where a and b are positive constants adjusted so that D goes to 0 at the final time and is of the same magnitude as d MyC/dt. (This gives a=6.3, b=0.063.) Clearly, D is a delayed and smoothed version of the input derivative, exactly as our derivative device would provide. § CONCLUSION AND PERSPECTIVES We have presented a mathematical analysis of the core differentiation CRN introduced by Whitby et al. <cit.>. In particular, we have shown that what is computed is an approximation of the left derivative taken at a small time in the past, with a time constant determined by the diffusion constant between the input and its internal representation: ϵ = 1/k_diff. Moreover, there is a delay τ due to the computation time that can also be precisely estimated given the rate of activation and degradation of the species encoding the derivative: τ = 1/k. We have shown that such results can be used in some cases to design error-correcting terms and obtain excellent implementations of functions of input signals using an approximation of their derivative on the fly. From a synthetic biology perspective, the derivative CRN may be very relevant in the context of biosensor design, when the test is not about the presence of some molecular compounds <cit.> but about their variation. A derivative CRN is also needed to construct PID controllers. The derivative control is known for damping the oscillations around the target of the controller, but delays are also known for producing such oscillations. Being able to determine and quantify those delays and errors is thus important to optimize the design. This device may also be used to approximate the derivative of an unknown external input in the context of online cellular computing. Once again, delays may produce detrimental artefacts that can easily be avoided when one is aware of the problem. Furthermore, using the notion of SEPI to scan the BioModels database, we were able to highlight a certain number of CRN models that contain the core differentiation CRN. A large fraction of these occurrences appear in models presenting oscillations. We have shown on one such example, a circadian clock model, why it makes sense for an oscillator to sense its own derivative, and to reproduce what a mathematician would produce in a more direct way for the most basic oscillatory functions: sine and cosine. §.§ Acknowledgment This work benefited from the ANR-20-CE48-0002 δifference project grant.
§ APPENDIX: COMPUTATION OF INTEGRATION WITH A DELAY To prove that the drift of the output is a direct consequence of the delay, we first compute the input and the approximate derivative for our choice of input: x(t) = 1+sin(t) x'(t-τ) = cos(t-τ) = cos(t) + τsin(t) + o(τ^2) Then we can compute the output up to first order: y(t) = ∫ 2 x(s) x'(s-τ) ds = ∫ 2 (1+sin(s)) cos(s) ds + ∫ 2 τ (sin(s)+sin^2(s)) ds = (1+sin(t))^2 + 2 τ∫sin(s)+sin^2(s) ds y(t) ≃(1+sin(t))^2 + τ t Then, to correct the observed drift, we propose to introduce a delayed version of the input signal and to use it in the computation producing the output species Y_+ and Y_-, with the following CRN: , , + , ∅, + D_+ D_+ ∅ + D_- D_- ∅ + D_+ + D_+ + Y_+ + D_- + D_- + Y_- D_+ + D_- ∅ Y_+ + Y_- ∅
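As a numerical check of the computation above, the following sketch (Python/SciPy; the variable names x_in, dp, dm, x_delay and all rate values are our own assumptions, and the ODEs follow the mass-action semantics of the circuits discussed above) integrates both the original square-computing circuit and the corrected one that uses a delayed copy of the input as catalyst; the first output drifts roughly like τ·t while the corrected one stays on (1+sin t)^2 up to a bounded phase-shift term.

# Sketch: drift of the online square computation and its removal using a delayed input
# as catalyst.  Names and rate values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

k_diff, k, fast = 100.0, 10.0, 100.0      # epsilon = 1/k_diff = 0.01, tau = 1/k = 0.1
x = lambda t: 1.0 + np.sin(t)             # input imposed by the environment

def rhs(t, v):
    x_in, dp, dm, x_delay, y, y_corr = v
    d = dp - dm                           # derivative estimate (fast terms cancel in d)
    return [k_diff * (x(t) - x_in),                        # internal copy, delay ~ epsilon
            k * k_diff * x(t) - k * dp - fast * dp * dm,   # D+
            k * k_diff * x_in - k * dm - fast * dp * dm,   # D-
            k * (x(t) - x_delay),                          # delayed copy, delay ~ tau
            2.0 * x(t) * d,                                # original circuit
            2.0 * x_delay * d]                             # corrected circuit

v0 = [x(0.0), 0.0, 0.0, x(0.0), 1.0, 1.0]                  # Y(0) = f(x(0)) = 1
sol = solve_ivp(rhs, (0.0, 40.0), v0, t_eval=np.linspace(0.0, 40.0, 4001),
                method="LSODA", rtol=1e-8, atol=1e-10)
exact = (1.0 + np.sin(sol.t)) ** 2
print(sol.y[4][-1] - exact[-1])   # ~ tau * t ~ 4 at t = 40: the predicted drift
print(sol.y[5][-1] - exact[-1])   # an order of magnitude smaller: the drift is removed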
http://arxiv.org/abs/2307.04448v1
20230710095736
Casimir effect of Lorentz-violating charged Dirac in background magnetic field
[ "Ar Rohim", "Apriadi Salim Adam", "Arista Romadani" ]
hep-th
[ "hep-th", "quant-ph" ]
[email protected] Research Center for Quantum Physics, National Research and Innovation Agency (BRIN), South Tangerang 15314, Indonesia Departemen Fisika, FMIPA, Universitas Indonesia, Depok 16424, Indonesia [email protected] Research Center for Quantum Physics, National Research and Innovation Agency (BRIN), South Tangerang 15314, Indonesia [email protected] Department of Physics, Faculty of Science and Technology, Universitas Islam Negeri Maulana Malik Ibrahim Malang, Malang 65144, Indonesia We study the effect of Lorentz symmetry breaking on the Casimir energy of a charged Dirac field in the presence of a uniform magnetic field. We use the boundary condition from the MIT bag model to represent the property of the plates. We investigate two cases of the direction of the violation, namely, the time-like and space-like vector cases. We discuss how the Lorentz violation and the magnetic field affect the structure of the Casimir energy and its pressure. We also investigate the weak and strong magnetic field cases in two different limits, heavy and light masses. Casimir effect of Lorentz-violating charged Dirac in background magnetic field Arista Romadani August 12, 2023 ============================================================================== § INTRODUCTION The Casimir effect, representing quantum field effects under macroscopic boundaries, was first predicted by H. B. G. Casimir in 1948 <cit.>. He showed that the quantum vacuum fluctuations of the electromagnetic field confined between two parallel plates generate an attractive force. One decade later, in 1958, Sparnaay performed an experimental measurement of the effect, although with rough precision <cit.>. He found that the attractive force between the plates does not contradict the theoretical prediction. After his work, subsequent studies showed that the Casimir effect has been experimentally confirmed with high precision <cit.>. The Casimir effect itself has many applications in nanotechnology <cit.>, and the theoretical discussion has been elaborated in connection with several research areas, for example, cosmology <cit.> and condensed matter physics <cit.> (see, e.g., Refs. <cit.> for a review). Studies have shown that the Casimir effect arises not only for the electromagnetic field but also for other fields. The geometry of the plates' surfaces, represented by the form of the boundary conditions, also determines how the Casimir effect behaves. To discuss the Casimir effect of a scalar field, one can use Dirichlet boundary conditions, with the field vanishing at the surface of the plates. In such a case, one can also employ Neumann and/or mixed boundary conditions <cit.>. However, in the case of a fermion field, one cannot apply such boundaries because the solution for the fermion field is derived from a first-order differential equation. Alternatively, one may use a bag boundary condition that guarantees a vanishing flux at the plates' surfaces. The well-known form covering this property is the boundary condition from the MIT bag model <cit.> (see Ref. <cit.> for a review). An extension of this boundary that includes the role of the chiral angle has been employed in the literature (see e.g. Refs. <cit.>, c.f. Ref. <cit.> for the self-adjoint variant). The Casimir effect can also be investigated in systems with charged quantum fields in a background magnetic field. In such a system, one can investigate how the charged quantum field couples to the quantum fluctuations <cit.>.
On the other hand, the Casimir effect in systems involving Lorentz violation has also attracted some attention <cit.>. Within the framework of string theories, spontaneous Lorentz breaking may occur through the dynamics of Lorentz covariant fields <cit.>. Such dynamics generate interactions through which Lorentz tensors gain nonzero expectation values. This is analogous to the Higgs mechanism in the context of the standard model. Several studies have investigated systems under Lorentz symmetry breaking and the CPT anomaly <cit.>. These two phenomena could possibly be measured in experiments, for instance, measurements of neutral-meson oscillations <cit.>, QED tests in Penning traps <cit.>, and the baryogenesis mechanism <cit.>. Hence, in this work, we study a system of charged fields involving both Lorentz violation and a background magnetic field. In particular, we investigate the Casimir effect of the system under such effects. In our setup, the magnetic field is applied parallel to the normal of the plates' surfaces. We investigate two cases of the Lorentz-violating direction, i.e., timelike and space-like directions. For the spacelike case, we restrict ourselves to discussing the violation in the z-direction only, because Lorentz violation in the x- and y-directions does not affect the behavior of the Casimir energy of a Dirac field <cit.>. In the present study, we employ the boundary condition from the MIT bag model <cit.>, which was originally used to describe quark confinement. It is natural that the presence of the boundary condition in the confinement system leads the momentum perpendicular to the boundary surface to take discrete allowed values. To discuss the Casimir effect, we investigate the mode expansion of the field consisting of the linear superposition of the positive- and negative-energy solutions associated with the creation and annihilation operators. We can evaluate the vacuum energy by applying the boundary condition to the mode expansion. In the present study, we use the Abel-Plana-like summation <cit.> to extract the divergence of the vacuum energy in the presence of boundary conditions. Then, the Casimir energy can be mathematically obtained by taking the difference between the vacuum energy in the presence of the boundary conditions and that in their absence, where both vacuum energies are infinite but their difference is finite. The rest of this paper is organized as follows. In Sec. <ref>, we describe the model of our system, namely, a Dirac field confined between two parallel plates with a background magnetic field under Lorentz violation, in the quantum field theory framework. In Sec. <ref>, we investigate the Casimir energy. In this section, we derive the solution for the field inside the confinement area following the procedure used in the literature (see e.g., Refs. <cit.>). In Sec. <ref>, we discuss the Casimir pressure. Section <ref> is devoted to our summary. In this paper, we use natural units so that c=ħ=1. § MODEL We consider a charged Dirac field confined between two parallel plates placed at z=0 and z=ℓ in the presence of a uniform magnetic field. The normal to the plates' surfaces is parallel to the z-axis (see Fig. <ref>). In our model, the Lorentz symmetry is not preserved.
The Lagrangian density for such a Dirac field with mass m is given by L=Ψ̅[iγ^μ∂_μ-eγ^μ A_μ- m+iλ u^μ u^νγ_μ∂_ν]Ψ, where Ψ̅(≡Ψγ^0) is the Dirac adjoint, λ is the dimensionless parameter with |λ |≪ 1, A_μ is the four vector potential, and u^μ is an arbitrary constants vector with u^μ u_μ can be 1,-1,0 for time-like, space-like, and light-like, respectively. The Lorentz symmetry breaking is characterized by the last term of Eq. (<ref>); the parameter λ contributes to the violation intensity while the vector u^μ describes the direction one <cit.>. In the present study, we use the 4× 4 gamma matrices γ^μ written in the Dirac representation as follows γ^0= [ I 0; 0 -I ]   and  γ^j= [ 0 σ^j; -σ^j 0 ], where I represents the 2× 2 identity matrix and σ^j is the 2× 2 Pauli matrices. The gamma matrices satisfy the anti-commutation relation as {γ^μ, γ^ν}=η^μν, where η^μν(≡ diag.(1,-1,-1,-1)) is the metric tensor of the Minkowski spacetime. The Dirac field Ψ satisfies the modified Dirac equation as follows [iγ^μ∂_μ-eγ^μ A_μ- m+iλ u^μ u^νγ_μ∂_ν]Ψ=0. The positive-energy solution for the above Dirac equation is given as Ψ^(+)(r)=e^-iω tψ( r)=e^-iω t[ χ_1; χ_2 ], where χ_1 and χ_2 are the upper and lower two-component spinors, respectively. We use ω to represent the eigenenergy of the Dirac field. In our model, the magnetic field is raised in the z-direction B=(0,0,B), where one can choose the corresponding four-vector potential components as follows A_0=A_2=A_3=0     and    A_1=-yB, with B as the magnetic field strength. The geometry of the plates is described by the boundary condition from the MIT bag model as follows <cit.> i n_μγ^μΨ=Ψ, where n_μ is the unit normal inward four-vector perpendicular to the boundary surface. The consequence of this boundary is the vanishing flux or normal probability density at the plate surface n_μ J^μ (≡ n_μΨ̅γ^μΨ)=0. The idea of this boundary is that the mass of the field is written as a function of its position; inside the confinement area, the mass has a finite value and becomes infinite at the boundary surface. Then, one can suppose that the field outside the confinement area vanishes (see Ref. <cit.> for the confinement model of a relativistic particle). While inside the confinement area, the solution for the field is written as the superposition between the left- and right-field components. § CASIMIR ENERGY In this section, we derive the Casimir energy of a Lorentz-violating charge Dirac in a background magnetic field. We study two directions of the Lorentz violation, namely, time-like and space-like vector cases. We derive the solution for the Dirac field inside the confinement area under the boundary condition from the MIT bag model <cit.>. We follow the general procedure given in Refs. <cit.>. Then, we compute the Casimir energy using the Abel-Plana-like summation <cit.> following Refs. <cit.>. In addition, we also investigate the Casimir energy approximately for the case of weak and strong magnetic fields. §.§ Time-like vector case We consider the positive-energy solution for the timelike vector case with u^(t)=(1,0,0,0). In this case, the Dirac equation (<ref>) gives two equations as follows [(1+λ)ω-m]χ^(t)_1=(-iσ^j∂_j+eyBσ^1)χ^(t)_2, [(1+λ)ω+m]χ^(t)_2=(-iσ^j∂_j+eyBσ^1)χ^(t)_1, from which we have the equation for the upper two-component spinor χ^(t)_1 as [(1+λ)^2ω^2-m^2]χ^(t)_1 = (-iσ^j∂_j+eyBσ^1)^2χ^(t)_1 = [-∇^2+e^2y^2B^2-eB(i2y∂_1+σ^3)]χ^(t)_1. 
In the above equation, we have used the commutation and anti-commutation relations of the Pauli matrices given as [σ^l,σ^m]=2iϵ_lmnσ^n and {σ^m,σ^n}=2δ_mnI, respectively, where δ_mn is a Kronecker delta and ϵ_lmn is a Levi Civita symbol. To find the solution for χ^(t)_1 in Eq. (<ref>), one can propose the following form χ^(t)_1=e^ik_1 xe^ik_3 z F^(t)(y). The presence of the Pauli matrix σ^3 in Eq. (<ref>) leads two independent solution for F^(t)(y) as follows F^(t)_+(y) = [ f^(t)_+(y); 0 ]    and    F^(t)_-(y) = [ 0; f^(t)_-(y) ] . Then, it is convenient to introduce s=± 1 so that the solution for f^(t)_s(y) can be read in a general way as σ^3F^(t)_s(y)=sF^(t)_s(y), and introduce a new parameter as ξ^(+, t)=√(eB)(y+k_1 eB). Then, Eq. (<ref>) can be read as Hermite's equation for arbitrary s as follows [d^2 dξ^(t)2-ξ^(t)2+a^(t)_s]f^(t)_s(y)=0, where a^(t)_s=(1+λ)^2ω^2-m^2-k^2_3+eBs eB. We now have the eigenenergies as[We have used |eB| to avoid imaginary value of ω.] ω^(t)_n',k_3=(1+λ)^-1√(m^2+k^2_3+|eB|(2n'+1)-|eB|s), where we have used a^(t)_s=2n'+1 with n'=0,1,2,3,⋯. The appropriate solution for f^(t)_s(y) with positive value eB that satisfies Hermite's equation (<ref>) is given by f^(t)_s(y)= √((eB)^1/2 2^nn'!(π)^1/2) e^-ξ^2/2H_n'(ξ^(t)), where f^(t)_s(y) has been normalized. The solution for F^(t)_s(y) is characterized by two conditions, namely, n'=n for s=+1 and n'=n-1 for s=-1. They can be written as follows F^(t)_+(y) = [ f^(t)_k_1,n(y); 0 ]    and    F^(t)_-(y) = [ 0; f^(t)_k_1,n-1(y) ] . We note that the eigenenergy for both values of s gives the same expression as ω^(t)_n, k_3=(1+λ)^-1√(m^2+k^2_3+2n|eB|), where n=0,1,2,3,⋯ is the Landau level. Then, we can finally derive the spatial solution for the right-moving field component as follows ψ^(+, t)_k_1,n,k_3 ( r) = e^ik_1 xe^ik_3 z2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n,k_3+m)) ×[C_1 [ ((1+λ)ω^(t)_n,k_3+m) f^(t)_k_1, n(y); 0; k_3f^(t)_k_1, n(y); √(2neB) f^(t)_k_1, n-1(y) ] + C_2 [ 0; ((1+λ)ω^(t)_n,k_3+m) f^(t)_k_1, n-1(y); √(2neB) f^(t)_k_1, n(y); -k_3f^(t)_k_1, n-1(y) ]],   for n≥ 1 ψ^(+, t)_k_1,0,k_3 ( r)= e^ik_1 xe^ik_3 z2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)) C_0 f^(t)_k_1, 0(y) [ (1+λ)ω^(t)_0, k_3+m; 0; k_3; 0 ],   for n=0, where C_0, C_1 and C_2 are the complex coefficients and f^(t)_k_1, n(y) is given by f^(t)_k_1, n(y)=√((eB)^1/2 2^n n!π^1/2)exp[-eB 2(y+k_1 eB)^2]H_n[√(eB)(y+k_1 eB)], with H_n(ξ) is the Hermite polynomial. In a similar way, we can obtain the solution for the left-moving field component as follows ψ^(+, t)_k_1,n,-k_3( r) = e^ik_1 xe^-ik_3 z2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n,k_3+m)) ×[C̃_1 [ ((1+λ)ω_nk_3+m) f^(t)_k_1, n(y); 0; -k_3f^(t)_k_1, n(y); √(2neB) f^(t)_k_1, n-1(y) ] + C̃_2 [ 0; ((1+λ)ω_nk_3+m) f^(t)_k_1, n-1(y); √(2neB) f^(t)_k_1, n(y); k_3f^(t)_k_1, n-1(y) ]],   for n≥ 1 ψ^(+, t)_k_1,0,-k_3 ( r)= e^ik_1 xe^-ik_3 z2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)) C̃_0 f^(t)_k_1, 0(y) [ (1+λ)ω^(t)_0, k_3+m; 0; -k_3; 0 ] ,   for n=0, where C̃_0, C̃_1 and C̃_2 are the complex coefficients. The total field solution is given by the linear combination between the left- and right-moving field components as follows[In the case of preserved Lorentz symmetry (λ=0), the solution is completely the same as that of Ref. <cit.>.] ψ^(+, t)_k_1,n, k_3( r)=ψ^(+, t)_k_1,n,k_3( r)+ψ^(+, t)_k_1,n,-k_3( r), where we use k_3 l to represent the allowed momentum in the system, as we will see below. 
For arbitrary non-zero complex coefficients, we have the constraint for momentum component in the z-direction (k_3) in the case of n≥ 0 as follows mℓsin(k_3ℓ)+k_3 ℓcos (k_3ℓ)=0. The detailed derivation is given in Appendix <ref>. The solution for Eq. (<ref>) is given by k_3l with l=1,2,3,⋯, which indicates that the allowed momentum k_3 must be discrete. As a consequence, the energy of the field under the MIT boundary condition must also be discrete as follows ω^(t)_n,l=(1+λ)^-1√(m^2+k^2_3l+2n|eB|). These properties not only hold for positive-energy solutions but also for the negative-energy counterpart. One can see that the magnetic field and parameter λ do not affect the structure of the momentum constraint. In this context, the former is similar to that in the absence of the magnetic field <cit.> while the latter is similar to that of the preserved Lorentz symmetry. We now write down a mode expansion of the Dirac field in the time-like vector case under the boundary condition from the MIT bag model as Ψ^(t)_k_1,n,l(r)= ∑^∞_n=0∑^∞_l=1∫^∞_-∞d k_1 [â_k_1,n,lΨ^(+,t)_k_1,n,l(r)+ b̂^†_k_1,n,lΨ^(-,t)_k_1,n,l(r) ], where Ψ^(±,t)_k_1,n,l(r) are the positive (+) and negative (-) energy solutions. See Appendix <ref> for the detailed expression of the negative-energy solution. The annihilation and creation operators in Eq. (<ref>) satisfy the following anti-commutation relations {â_k_1,n,l,â^†_k'_1,n',l'}={b̂_k_1,n,l,b̂^†_k'_1,n',l'}=δ_nn'δ_ll'δ(k_1-k'_1), and the other anticommutation relations vanish. The Dirac field satisfies orthonormality conditions as follows ∫ d x_⊥∫^ℓ_0 dz ψ^(j,t)†_k_1,n, l( r)ψ^(j',t)_k'_1,n', l'( r)=δ_jj'δ_nn'δ_l l'δ(k_1-k'_1),    j,j'=0,1,2 , by which we can obtain the relations of the complex coefficients of the field. We use x_⊥≡ (x,y) to represent the sub-spatial coordinate parallel to the normal plates' surface. From the above Lagrangian density (<ref>), one can obtain the Hamiltonian density in the time-like vector case as follows H^(t)=-Ψ̅^(t)[iγ^j∂_j-eγ^μ A_μ- m]Ψ^(t)=i(1+λ)Ψ^(t)†∂_0Ψ^(t). Then we are now ready to evaluate the vacuum energy as follows E^(t)_ Vac.=∫_Ω d^3 x E^(t)_ Vac.=∫_Ω d^3 x⟨ 0| H^(t)|0⟩ = -|eB|L^2π∑_n=0^∞∑_l=1^∞ i_n√(m^2+(k'_3lℓ)^2+2n|eB|), where E_ Vac. is the vacuum energy density, i_n=1-1 2δ_n0, k'_3l≡ k_3lℓ, and Ω is the volume of the confinement area. One can derive the Casimir energy by subtracting the vacuum energy in the presence of the boundary condition from the absence of one. We note that the roles of λ do not appear in the vacuum energy for the time-like vector case. In other words, the Casimir energy also does not depend on λ. In the next subsection, we will show that the above result can be recovered in the case of the preserved Lorentz symmetry. Therefore, it is not necessary to evaluate further the Casimir energy in this subsection. §.§ Space-like vector case In this subsection, we investigate the Casimir energy for the space-like vector case in the z-direction. We start the discussion by deriving the solution for the space-like vector case with u^(z)=(0,0,0,1). In this case, the Dirac equation (<ref>) gives two equations as follows (ω-m)χ^(z)_1=(-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)χ^(z)_2, (ω+m)χ^(z)_2=(-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)χ^(z)_1. Multiplying both sides of Eq. (<ref>) by (ω+m) and using Eq. (<ref>), we have the equation for the upper two-component spinor χ^(z)_1 as follows (ω^2-m^2)χ^(z)_1 = (-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)^2χ^(z)_1 = [-∇^2+e^2y^2B^2-eB(2iy∂_1+σ^3)+2λ∂^2_3-λ^2∂^2_3]χ^(z)_1. 
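As a simple numerical illustration of the momentum quantization expressed by the constraint above, the following sketch (Python/SciPy; the dimensionless values mℓ = 1 and ℓ²|eB| = 2 are arbitrary assumptions chosen only for illustration) brackets and solves the transcendental equation for the first few allowed k_3l ℓ and evaluates the corresponding dimensionless eigenenergies (1+λ) ℓ ω^(t)_n,l.

# Sketch: solve m*l*sin(x) + x*cos(x) = 0 for x = k_3 * l (timelike case) and build the
# discretized eigenenergies.  Parameter values are illustrative assumptions only.
import numpy as np
from scipy.optimize import brentq

m_l = 1.0        # dimensionless mass m' = m * l
eB_l2 = 2.0      # dimensionless magnetic field l^2 |eB|

f = lambda x: m_l * np.sin(x) + x * np.cos(x)

# The l-th root lies between (l - 1/2)*pi and (l + 1/2)*pi, where f changes sign.
roots = [brentq(f, (l - 0.5) * np.pi, (l + 0.5) * np.pi) for l in range(1, 6)]
print(roots)   # allowed k_3l * l; for large l they approach (l - 1/2) * pi

# Dimensionless eigenenergies (1 + lambda) * l * omega_{n,l} for the first Landau levels:
for n in range(3):
    print([np.sqrt(m_l**2 + x**2 + 2 * n * eB_l2) for x in roots])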
One can propose the solution χ^(z)_1 as follows χ^(z)_1=e^ik_1 xe^ik_3 zf^(z)(y). Along the same procedure used in the previous subsection, substituting back Eq. (<ref>) into Eq. (<ref>) brings us to Hermite's equation in which we have the eigen energies given as ω^(z)_n, k_3=√(m^2+(1-λ)^2k^2_3+2 n |eB|). We find that the solution of the Dirac field confined between two parallel plates in the space-like vector case of z-direction for the right-moving field with positive value eB is given as follows ψ^(z)_k_1,n,k_3 ( r)=e^ik_1 xe^ik_3 z2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)) [C_1 [ (ω_n, k_3+m) F^(z)_k_1, n(y); 0; (1-λ) k_3F^(z)_k_1, n(y); √(2neB) F^(z)_k_1, n-1(y) ] + C_2 [ 0; (ω_nk_3+m) F^(z)_k_1, n-1(y); √(2neB) F^(z)_k_1, n(y); -(1-λ)k_3F^(z)_k_1, n-1(y) ]],   for n≥ 1 ψ^(z)_k_1,0, k_3 ( r)= e^ik_1 xe^ik_3 z2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)) C_0 F^(z)_k_1 0(y) [ ω^(z)_0, k_3+m; 0; (1-λ) k_3; 0 ] ,  for n=0, where F^(z)_k_1, n(y)=√((eB)^1/2 2^n n!π^1/2)exp[-eB 2(y+k_1 eB)^2]H_n[√(eB)(y+k_1 eB)], with the Hermite polynomial H_n(y). In a similar way, we can obtain the solution for the left-moving field as follows ψ^(+,z)_k_1,n,-k_3( r)= e^ik_1 xe^-ik_3 z2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)) [C̃_1 [ (ω^(z)_n, k_3+m) F^(z)_k_1, n(y); 0; -(1-λ)k_3F^(z)_k_1, n(y); √(2neB) F^(z)_k_1, n-1(y) ] + C̃_2 [ 0; (ω^(z)_n, k_3+m) F^(z)_k_1, n-1(y); √(2neB) F^(z)_k_1, n(y); (1-λ)k_3F^(z)_k_1, n-1(y) ]],  for n≥ 1 ψ^(+,z)_k_1,0,-k_3 ( r)= e^ik_1 xe^-ik_3 z2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)) C̃_0 F^(z)_k_1, 0(y) [ ω^(z)_0,k_3+m; 0; -(1-λ)k_3; 0 ] ,  for n=0, where the eigen energies ω^(z)_n,k_3 are given by Eq. (<ref>) (see Appendix <ref> for the detailed derivation). The complex coefficients in the above Dirac field can be determined by similar orthonormality conditions given in Eq. (<ref>). We next write the total spatial solution for the Dirac field inside the confinement area as follows ψ^(+,z)_k_1,n,k_3( r)=ψ^(+,z)_k_1,n,k_3( r)+ψ^(+,z)_k_1,n,-k_3( r). For non-zero complex coefficients C_1, C_2,C̃_1,C̃_2, we have the constraint of the momentum k_3 as follows mℓsin(k_3ℓ)+(1-λ)k_3 ℓcos (k_3ℓ)=0, for arbitrary Landau level n. One can see that the parameter λ affects the constraint while the magnetic field does not. The allowed momentum that satisfies the constraint (<ref>) is k_3l with l=0,1,2,3,⋯. The discretized eigenenergies of the field under the MIT boundary can be written as follows ω^(z)_n,l=√(m^2+(1-λ)^2 k^2_3l+2n|eB|). Below we will compute the Casimir energy of charged Dirac field under the presence of the MIT boundary. For this purpose, we write down the Hamiltonian density for the space-like vector case as follows, H^(z)=-Ψ̅^(z)[iγ^j∂_j-eγ^μ A_μ- m]Ψ^(z)=iΨ^(z)†∂_0Ψ^(z). The vacuum energy reads E_ Vac.=-|eB| L^2π∑_n=0^∞∑_l=1^∞ i_n √(m^2+(1-λ)^2(k'_3lℓ)^2+2n|eB|), where we have used the eigenenergies given in Eq. (<ref>) and k'_3ℓ(≡ k_3lℓ). From the above vacuum energy, one can see that its value is divergent. To solve the issue, we employ the Abel-Plana-like summation as follows <cit.> ∑_l=1^∞π f_n(k'_3l)(1-sin(2k'_3l) 2k'_3l)=-π b mf_n(0) 2 (b m+1)+∫_0^∞ dz f_n(z) - i∫_0^∞ dt f_n (it)-f_n(-it)t+b m t-b me^2t+1. From the momentum constraint in the space-like vector case (<ref>), the denominator of the left-hand side Eq. (<ref>) can be rewritten in the following form 1-sin(2k'_3l) 2k'_3l = 1 +b m k'^2_3l+(bm)^2, where b=ℓ (1-λ)^-1. Then, after applying the Abel-Plana-like summation to the vacuum energy, Eq. 
(<ref>) becomes E_ Vac.=-|eB|L^2π^2 b∑_n=0^∞ i_n [-π b m f_n(0) 2 (b m +1)+∫_0^∞ dq f_n(q) - i∫_0^∞ dt f_n (it)-f_n(-it)t+b m t-b me^2t+1], where the function f_n(q) is defined as f_n(q)= √(m^2b^2+q^2+2n|eB| b^2)(1 +b m q^2+(bm)^2). Next, one can decompose the first and second terms in the vacuum energy (<ref>) into two parts: (i) in the absence of the boundary conditions of two plates and (ii) in the presence of one plate. The latter part is irrelevant to our discussion because it does not contribute to the force. Then, the last term of Eq. (<ref>) can be understood as the Casimir energy E_ Cas.=i |eB|L^2π^2 b∑_n=0^∞ i_n ∫_0^∞ dt f_n (it)-f_n(-it)t+b m t-b me^2t+1. Using Eq. (<ref>) and introducing variable of t=bu, the Casimir energy reads E_ Cas.= -2 |e B| L^2 /π^2 ∑_n = 0^∞ i_n ∫_0^∞ d u √(u^2 - M_n^2 )( b ( u - m ) - m / (m + u)/(u + m) e^2 b u + u - m), where [ M_n = √(m^2 + 2 n |e B|). ] The range of integration of Eq. (<ref>) can be split into two intervals, i.e., [0,M_n] and [M_n,∞]. The integration result of the first interval vanishes while the second one remains. To further proceed with the Casimir energy, we next rewrite the following quantity as b (u - m) - m / (m + u)/(u + m) e^2 b u + u - m = - 1/2d/d uln( 1 + u - m/u + m e^- 2 b u), which leads the Casimir energy to E_ Cas. = |e B| L^2 /π^2 b∑_n = 0^∞ i_n ∫_0^∞ d y √(y^2 + 2 y b M_n)d/d yln( 1 + y + b (M_n - m)/y + b (M_n + m) e^- 2 (y + b M_n) ), where we have introduced a new variable as y = b u - b M_n. Performing integration by part for Eq. (<ref>), we finally find the simpler form of the Casimir energy as follows E_ Cas.=-|eB| L^2π^2 b∑^∞_n=0 i_n ∫^∞_0 dy (y+bM_n)(y^2+2byM_n)^-1/2ln( 1 + y + b (M_n - m)/y + b (M_n + m) e^- 2 (y + b M_n) ). We next numerically evaluate the expression of the Casimir energy given in Eq. (<ref>). The left panel of Fig. <ref> depicts the scaled Casimir energy as a function of the dimensionless parameter m'(≡ mℓ) for various values of the parameter λ=0,0.01,0.1 with a fixed parameter ℓ^2|eB|=2. From this figure, we find that the scaled Casimir energy converges to zero as the parameter m' becomes larger. The right panel of figure <ref> depicts the scaled Casimir energy as a function of the dimensionless parameter ℓ^2|eB| for a fixed parameter m'=1. From this figure, one can see that the scaled Casimir energy also converges to zero as the parameter ℓ^2 |eB| increases. Both panels of Fig. <ref> show that the parameter λ increases, the Casimir energy will increase and vice versa, as previously shown by Ref. <cit.> for the absence of the magnetic field. Figure <ref> plots the scaled Casimir energy as a function of the dimensionless parameter ℓ^2 |eB| for various values of parameter λ=0,0.01,0.1 with a fixed parameter m'=1. One can see that the increasing ℓ^2 |e B| leads to the converging of the Casimir energy to zero. In the rest of this part, we investigate the approximate cases of the Casimir energy. In the case of the weak magnetic field B→ 0, the above Casimir energy (<ref>) for an arbitrary m'(≡ mℓ) reduces to E_ Cas.≃ -L^2π^2 b^3∫^∞_bm dx x^2 ∫^∞_0 dv (v+1)1√(v(v+2))ln( 1+x(v+1)-bm x(v+1)+bm e^-2x(v+1)). To obtain the above expression, we have used the replacement of summation with integration, v=y/(b M_n), and x=bM_n. Taking the case of light mass m'≪1 for Eq. (<ref>), we recover the earlier result by Ref. <cit.> as follows E_ Cas.≃-7π^2 (1-λ)^3 L^2 2880 ℓ^3[1-120 m' 7π^2(1-λ)], where we have expanded the integrand up to the order of 𝒪(m') and omitted the higher ones. 
The first term corresponds to the Casimir energy in the massless case with the effect of the Lorentz violation while the second term corresponds to the correction part. In the case of the preserved Lorentz symmetry, λ=0, we recover the well-known Casimir energy of the massless fermion derived by Johnson <cit.>. To obtain the approximated result of Eq. (<ref>), one can also start from the general Casimir energy (<ref>) and take its light mass case m'≪ 1 for the arbitrary magnetic field as E_ Cas.≃ -|eB|L^2π^2b∑_n=0^∞ i_n∫^∞ _0dy [(y+b√(2 n e B))ln(1+e^-2(y+b√(2 n e B)))√(y^2+2y b√(2n e B))-2b me^-2(y+b√(2 n e B))√(y^2+2y b√(2n e B))(1+e^-2(y+b√(2 n e B)))]. Then, taking the limit of the weak magnetic field, the above expression reduces to Eq. (<ref>). In the case of heavy mass m'≫ 1, we find that the Casimir energy approximately reduces to E_ Cas.≃ - |e B|L^2(1-λ)^3/2 16 π^3/2ℓ√(m')∑_n=0^ ∞ i_ne^-2√( m'^2+2 n B') (1-λ), where we have expanded the integrand of Eq. (<ref>) up to the order of 𝒪(1/m') and omitted the higher ones. In the case of weak magnetic field B→ 0, the above Casimir energy (<ref>) reads E_ Cas.≃ - L^2 (1 - λ)^5 / 2√(m')/32 π^3 / 2ℓ^3 e^- 2 m'/(1 - λ). We can see that, in the case of heavy mass, the Casimir energy goes to zero as the increase of mass. We next investigate the Casimir energy in the case of the strong magnetic field ℓ^2 eB≫ 1. In this case, together with light mass m'≪ 1, the Casimir energy in Eq. (<ref>) approximately reduces to E_ Cas.≃ -|eB|L^2 (1-λ) 48 ℓ. Meanwhile for the case of strong magnetic field ℓ^2 |eB|≫ 1 and taking the limit of heavy mass m'≫ 1, the Casimir energy reads E_ Cas.≃-|eB|L^2 (1-λ)^3/2 32 π^3/2ℓ√(m')e^-2m' (1-λ). From the above expression, we note that the Casimir energy converges to zero as the increase of parameter m'. § CASIMIR PRESSURE In this section, we investigate the Casimir pressure for the spacelike vector case. It can be obtained from the Casimir energy (<ref>) by taking the derivative with respect to the plate's distance as P_Cas . = -1 L^2∂ E_ Cas.∂ℓ = - ∑_n = 0^∞ i_n ∫_0^∞ d y 1/(1 - λ) b^2 π^2 (y (2 b M_n + y))^3 / 2                     × e B y {2 b (b M_n + y) (2 b M_n + y) (b^2 M_n (M^2_n - m^2) + 2 b M^2_n y + y (m + M_n y))/b^2 (M^2_n - m^2) + 2 b M_n y + y^2 + e^2 (b M_n + y) (b (m + M_n) + y)^2.                                  . + (b^2 M^2_n + 3 b M_n y + y^2) ln( 1 + e^- 2 (b M_n + y) (b (- m + M_n) + y)/b (m + M_n) + y) }. We plot the behavior of the scaled Casimir pressure in Figs. <ref> and <ref>. In general, we can see that its behavior is similar to that of the Casimir energy. From the left panel of Fig. <ref>, one can see the scaled Casimir pressure converges to zero as the increases of parameter m' while from the right panel, it increases as the increases of ℓ^2 |eB|. These behaviors are supported by Fig. <ref>. Both panels of Fig. <ref> show that the Casimir pressure increases as the increases of parameter λ. We next investigate the Casimir pressure in the case of weak and strong magnetic fields. In the case of weak magnetic field B→ 0, the Casimir pressure (<ref>) approximately reduces to P_ Cas. ≃ - 1/(1 - λ) b^4 π^2∫_b m^∞ d x ∫_0^∞ d v x^2/v^1 / 2 (2 + v)^3 / 2 ×( 2 x (1 + v) (2 + v) (x^2 (1 + v)^2 + t b m - (b m)^2)/x^2 (1 +v)^2 - (b m)^2 + e^2 x (1 + v) (b m + x (1 + v))^2 + (1 + 3 v + v^2) ln( 1 + e^- 2 x (1 + v) (- b m + x (1 + v))/(b m + x (1 + v))) ). 
We further take light mass limit m'≪ 1 for the above expression, then we have P_ Cas.≃ -(1-λ)^2(7π^2 (1-λ)-80m') 960 ℓ^4, which covers the earlier result of Ref. <cit.>. As discussed in the previous section, to obtain the above expression, we can use the reverse way, namely, taking its light mass limit and then considering the weak magnetic field. The Casimir pressure for the case of light mass with the arbitrary magnetic field is approximately given as follows P_ Cas.≃ P^(0)_ Cas.+P^(1)_ Cas., where P^(0)_ Cas. is the Casimir pressure for the massless case explicitly given as P^(0)_ Cas. = - ∑_n = 0^∞ i_n ∫_0^∞ d y |e B| y/b^2 π^2 (1 - λ) ( y ( 2 b √(2 n e B) + y ) )^3 / 2 ×{2 b √(2 n e B)( 2 b √(2 n e B) + y ) ( b √(2 n e B) + y ) /( 1 + e^2 ( b √(2 n e B) + y )) + ( b^2 2 n e B + 3 b √(2 n e B) y + y^2 ) ln( 1 + e^- 2 ( b √(2 n e B) + y )) }, and P^(1)_ Cas. is the first order correction to the Casimir pressure 𝒪(m^') explicitly given as P^(1)_ Cas. = ∑_n = 0^∞ i_n ∫_0^∞ d y 2 |e B| y b √(2 n e B)( 1 + e^2 ( b √(2 n e B) + y ) (1 + 2 y) + 4 e^2 ( b √(2 n e B) + y ) b √(2 n e B)) b m/b^2 π^2 ( 1 + e^2 ( b √(2 n e B) + y ))^2 ( y ( y + 2 b √(2 n e B)) )^3 / 2 (1 - λ) . We next investigate the Casimir pressure (<ref>) in the case of heavy mass m'≫ 1. In this case, we have P_ Cas.≃ - |e B| √(m')/(1 - λ)^1 / 2 8 π^3 / 2 b^2∑_n = 0^∞ i_n e^- 2 √(m'^2 + 2 n e B), and with the limit of the weak magnetic field B→ 0, the above Casimir pressure approximately reduces to P_ Cas.≃ - (1 - λ)^5 / 2m'^3 / 2/16 π^3 / 2ℓ^4 e^- 2 m'/(1 - λ). Similar behavior to the Casimir energy (<ref>), one can see that the Casimir pressure in the limit of heavy mass (<ref>) converges to zero as increasing of the particle's mass. Based on the result of the Casimir pressure in the cases of light (<ref>) and heavy masses (<ref>), we will analyze the behavior in the strong magnetic field. Taking the limit of strong magnetic field ℓ^2 |eB|≫ 1 for Eq. (<ref>), the Casimir pressure approximately reduces to P_ Cas.≃ -|eB|L^2 (1-λ) 48 ℓ^2, while for Eq. (<ref>), we obtain P_ Cas.≃ -|eB|L^2 (1-λ)^3/2√(m') 16 π^3/2ℓ^2 e^-2m' (1-λ). One can also derive both above equations by taking the derivative of the Casimir energy Eqs. (<ref>) and (<ref>) with respect to the plate's distance. § SUMMARY We have studied the Casimir effect of a Lorentz-violating Dirac with a background uniform magnetic field. The Lorentz violation is described by two parameters: (i) λ , which determines the intensity of the violation and (ii) vector u^μ, which determines the direction of the violation. In the present study, we investigated two vector cases, namely, timelike and spacelike vector cases. For the spacelike vector case, we only discussed the z-direction. The purpose of the study is to find the effect of the Lorentz violation parameter λ together with the presence of the magnetic field in the behavior of the Casimir energy as well as its pressure. We used the boundary condition from the MIT bag model <cit.> to represent the property of the plates. From our derivation, we find that for the timelike vector case, the magnetic field and the Lorentz violating parameter do not affect the structure of the momentum constraint while for the spacelike vector case, only Lorentz violating parameter appears. We noted that the vacuum energy under the MIT boundary condition is divergent. 
Using the Abel-Plana-like summation <cit.>, we can decompose this vacuum energy into three main parts, namely, the vacuum energy in the absence of the boundary condition, the vacuum energy in the presence of a single boundary condition, which is not relevant to the Casimir effect, and the remaining term, which corresponds to the Casimir energy. We can derive the Casimir energy by subtracting the vacuum energy in the presence of the boundary condition from that in the absence of one. The Lorentz violation in the timelike vector case does not affect the structure of the Casimir energy or its pressure, while in the spacelike vector case the violation does affect them. We also found that the magnetic field affects the Casimir energy and the pressure for both the timelike and spacelike vector cases. We have demonstrated the behavior of the scaled Casimir energy and pressure as functions of the mass, the parameter λ, and the magnetic field. For fixed parameter λ and magnetic field, the scaled Casimir energy and pressure converge to zero as the mass increases (see the left panels of Figs. <ref> and <ref>). For fixed parameter λ and mass, the scaled Casimir energy and pressure converge to zero as the magnetic field increases (see the right panels of Figs. <ref> and <ref>). We also found that an increase of the parameter λ leads to an increase of the Casimir energy and the pressure, as has been pointed out in Ref. <cit.>. For future work, it would be interesting to discuss thermal effects in a setup similar to our present work (c.f. Ref. <cit.> for the scalar field). It would also be interesting to study a similar setup under more general boundaries, for example, chiral MIT boundary conditions <cit.>. § ACKNOWLEDGMENTS A. R. was supported by the National Research and Innovation Agency (BRIN) Indonesia, through the Post-Doctoral Program. § DETAILED DERIVATION OF THE CONSTRAINT FOR THE MOMENTUM In this section, we provide the complementary derivation of the momentum constraint. Applying the boundary condition from the MIT bag model (<ref>) to the solution of the Dirac equation, we have the two equations iσ^3χ_2|_z=0-χ_1|_z=0=0, iσ^3χ_2|_z=ℓ+χ_1|_z=ℓ=0, where we have used n^(0)_μ=(0,0,0,1) and n^(ℓ)_μ=(0,0,0,-1) at the first plate z=0 and the second plate z=ℓ, respectively. Then, in a more explicit form, we have the four boundary-condition equations iχ_21|_z=0-χ_11|_z=0=0, iχ_22|_z=0+χ_12|_z=0=0, iχ_21|_z=ℓ+χ_11|_z=ℓ=0, iχ_22|_z=ℓ-χ_12|_z=ℓ=0, where we have decomposed the two-component spinors χ_1 and χ_2 as χ_1= [ χ_11; χ_12 ], χ_2= [ χ_21; χ_22 ]. The boundary conditions of Eqs. (<ref>)-(<ref>) can be written simultaneously as products of two matrices as follows [ P_11 P_12; P_21 P_22 ][ C_0; C̃_0 ] =0,   for n=0, and [ Q_11 Q_12 Q_13 Q_14; Q_21 Q_22 Q_23 Q_24; Q_31 Q_32 Q_33 Q_34; Q_41 Q_42 Q_43 Q_44 ][ C_1; C_2; C̃_1; C̃_2 ] =0,   for n≥ 1, where the matrix elements are given by P^(t)_11=ik_3-((1+λ)ω^(t)_0k_3+m), P^(t)_12=-ik_3-((1+λ)ω^(t)_0k_3+m), P^(t)_21=[ik_3+((1+λ)ω^(t)_0k_3+m)]e^ik_3ℓ, P^(t)_22=[-ik_3+((1+λ)ω^(t)_0k_3+m)]e^-ik_3ℓ, Q^(t)_11=- Q^(t)_22=ik_3-((1+λ)ω^(t)_nk_3+m), Q^(t)_12= Q^(t)_14= Q^(t)_21= Q^(t)_23=i√(2neB), Q^(t)_13=- Q^(t)_24=-ik_3-((1+λ)ω^(t)_nk_3+m), Q^(t)_31=- Q^(t)_42=[ik_3+((1+λ)ω^(t)_nk_3+m)]e^ik_3ℓ, Q^(t)_32= Q^(t)_41=i√(2neB)e^ik_3ℓ, Q^(t)_34= Q^(t)_43=i√(2neB)e^-ik_3ℓ, Q^(t)_33=- Q^(t)_44=[-ik_3+((1+λ)ω^(t)_nk_3+m)]e^-ik_3ℓ.
and P^(z)_11=i(1-λ)k_3-(ω^(z)_0k_3+m), P^(z)_12=-i(1-λ)k_3-(ω^(z)_0k_3+m), P^(z)_21=[i(1-λ)k_3+(ω^(z)_0k_3+m)]e^ik_3ℓ, P^(z)_22=[-i(1-λ)k_3+(ω^(z)_0k_3+m)]e^-ik_3ℓ, Q^(z)_11=- Q^(z)_22=i(1-λ)k_3-(ω^(z)_nk_3+m), Q^(z)_12= Q^(z)_14= Q^(z)_21= Q^(z)_23=i√(2neB), Q^(z)_13=- Q^(z)_24=-i(1-λ)k_3-(ω^(z)_nk_3+m), Q^(z)_31=- Q^(z)_42=[i(1-λ)k_3+(ω^(z)_nk_3+m)]e^ik_3ℓ, Q^(z)_32= Q^(z)_41=i√(2neB)e^ik_3ℓ, Q^(z)_34= Q^(z)_43=i√(2neB)e^-ik_3ℓ, Q^(z)_33=- Q^(z)_44=[-i(1-λ)k_3+(ω^(z)_nk_3+m)]e^-ik_3ℓ, for timelike and spacelike in the z-direction vector cases, respectively. For arbitrary non-zero complex coefficients C_0,C̃_0, C_1, C_2,C̃_1,C̃_2 requires the vanishing of the determinant of 2× 2 matrix of Eq. (<ref>) and 4× 4 matrices of Eq. (<ref>) that lead the constraint for momentum k_3. § NEGATIVE-ENERGY SOLUTIONS §.§ Timelike vector case The negative energy solution for the right-moving field component is as follows ψ^(-,t)_k_1,n,k_3 ( r) = e^-ik_1 xe^-ik_3 z2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n, k_3+m)) ×[C̃_1 [ k_3f^(t)_-k_1 n(y); -√(2neB) f^(t)_-k_1 n-1(y); ((1+λ)ω^(t)_nk_3+m) f^(t)_-k_1 n(y); 0 ] + C̃_2 [ -√(2neB) f^(t)_-k_1 n(y); -k_3f^(t)_-k_1 n-1(y); 0; ((1+λ)ω^(t)_nk_3+m) f^(t)_-k_1 n-1(y) ]],   for n≥ 1 and ψ^(-,t)_k_1,0,k_3 ( r)= e^-ik_1 xe^-ik_3 z2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)) C̃_0 f^(t)_-k_1, 0(y) [ k_3; 0; (1+λ)ω^(t)_0,k_3+m; 0 ],   for n=0. The negative energy solution for the left-moving field component is as follows ψ^(-,t)_k_1,n,-k_3 ( r) = e^-ik_1 xe^ik_3 z2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n, k_3+m)) ×[ C_1 [ -k_3f^(t)_-k_1 n(y); -√(2neB) f^(t)_-k_1 n-1(y); ((1+λ)ω^(t)_nk_3+m) f^(t)_-k_1 n(y); 0 ] + C_2 [ -√(2neB) f^(t)_-k_1 n(y); k_3f^(t)_-k_1 n-1(y); 0; ((1+λ)ω^(t)_nk_3+m) f^(t)_-k_1 n-1(y) ]],   for n≥ 1 and ψ^(-,t)_k_1,0,-k_3 ( r)= e^-ik_1 xe^ik_3 z2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)) C_0 f^(t)_-k_1, 0(y) [ -k_3; 0; (1+λ)ω^(t)_0,k_3+m; 0 ],   for n=0. The total spatial solution inside the confinement area is given by the linear combination between the left- and right-moving field components as follows ψ^(-,t)_k_1,n, l( r)=ψ^(-,t)_k_1,n,k_3 l( r)+ψ^(-,t)_k_1,n,-k_3 l( r), where we use k_3 l to represent the allowed momentum in the system. §.§ Spacelike vector case (z-direction) The negative energy solutions for the right-moving field component are given as follows ψ^(-,z)_k_1,n,k_3 ( r)= e^-ik_1 xe^-ik_3 z2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)) [C̃_1 [ (1-λ) k_3F^(z)_-k_1, n(y); -√(2neB) F^(z)_-k_1, n-1(y); (ω^(z)_n,k_3+m) F^(z)_-k_1, n(y); 0 ] + C̃_2 [ -√(2neB) F^(z)_-k_1, n(y); -(1-λ)k_3F^(z)_-k_1, n-1(y); 0; (ω^(z)_nk_3+m) F^(z)_-k_1, n-1(y) ]],   for n≥ 1 and ψ^(-,z)_k_1,0,k_3 ( r)= e^-ik_1 xe^-ik_3 z2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)) C̃_0 F^(z)_-k_1, 0(y) [ (1-λ)k_3; 0; ω^(z)_0, k_3+m; 0 ] ,   for n=0, where f^(t)_-k_1, n(y)=√((eB)^1/2 2^n n!π^1/2)exp[-eB 2(y-k_1 eB)^2]H_n[√(eB)(y-k_1 eB)]. The negative energy solutions for the left-moving field component are given as follows ψ^(-,z)_k_1,n,-k_3 ( r)= e^-ik_1 xe^ik_3 z2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)) [ C_1 [ -(1-λ) k_3F^(z)_-k_1, n(y); -√(2neB) F^(z)_-k_1, n-1(y); (ω^(z)_n,k_3+m) F^(z)_-k_1, n(y); 0 ] + C_2 [ -√(2neB) F^(z)_-k_1, n(y); (1-λ)k_3F^(z)_-k_1, n-1(y); 0; (ω^(z)_nk_3+m) F^(z)_-k_1, n-1(y) ]],   for n≥ 1 and ψ^(-,z)_k_1,0,-k_3 ( r)= e^-ik_1 xe^-ik_3 z2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)) C_0 F^(z)_-k_1 0(y) [ -(1-λ)k_3; 0; ω^(z)_0, k_3+m; 0 ] ,   for n=0. 
The total spatial solution inside the confinement area is given by the linear combination between the left- and right-moving field components as follows ψ^(-,z)_k_1,n, l( r)=ψ^(-,z)_k_1,n,k_3 l( r)+ψ^(-,z)_k_1,n,-k_3 l( r), where we use k_3 l to represent the allowed momentum in the system. 99 Casimir1948 H. B. G. Casimir, Kon. Ned. Akad. Wetensch. Proc. 51, 793 (1948). Sparnaay1958 M. J. Sparnaay, Physica 24, 751 (1958). Lamoreaux97 S. K. Lamoreaux, Phys. Rev. Lett. 78, 5 (1997), Phys. Rev. Lett. 81, 5475 (1998) (E). Mohideen:1998iz U. Mohideen and A. Roy, Phys. Rev. Lett. 81, 4549 (1998). Roy:1999dx A. Roy, C. Y. Lin and U. Mohideen, Phys. Rev. D 60, 111101 (1999). Bressi:2002fr G. Bressi, G. Carugno, R. Onofrio and G. Ruoso, Phys. Rev. Lett. 88, 041804 (2002). Belluci2009 S. Bellucci and A. A. Saharian Phys. Rev. D 79, 085019 (2009). Hassan:2022hcb Z. Hassan, S. Ghosh, P. K. Sahoo and K. Bamba, Eur. Phys. J. C 82, 1116 (2022). Grushin2021 A. G. Grushin and A. Cortijo, Phys. Rev. Lett. 106, 020403 (2021). Grushin2011 A. G. Grushin, P. Rodriguez-Lopez, and A. Cortijo, Phys. Rev. B 84, 045119 (2011). Onofrio:2006mq R. Onofrio, New J. Phys. 8, 237 (2006). Bordag:2001qi M. Bordag, U. Mohideen and V. M. Mostepanenko, Phys. Rept. 353, 1-205 (2001). Ambjorn1983 J. Ambjorn and S. Wolfram, Annals Phys. 147, 1 (1983). Chodos:1974je A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. F. Weisskopf, Phys. Rev. D 9, 3471 (1974). Chodos:1974pn A. Chodos, R. L. Jaffe, K. Johnson and C. B. Thorn, Phys. Rev. D 10, 2599 (1974). Johnson:1975zp K. Johnson, Acta Phys. Polon. B 6, 865 (1975). Rohim:2022mri A. Rohim, A. S. Adam and K. Yamamoto, Prog. Theor. Exp. Phys. 2023, 013B05 (2023). Lutken:1983hm C. A. Lutken and F. Ravndal, J. Phys. G 10, 123 (1984). Sitenko:2014kza Y. A. Sitenko, Phys. Rev. D 91, 085012 (2015). Cougo-Pinto:1998jwo M. V. Cougo-Pinto, C. Farina and A. C. Tort, Conf. Proc. C 9809142, 235 (1999). Ostrowski:2005rm M. Ostrowski, Acta Phys. Polon. B 37, 1753 (2006). Elizalde:2002kb E. Elizalde, F. C. Santos and A. C. Tort, J. Phys. A 35, 7403 (2002). Cougo-Pinto:1998jun M. V. Cougo-Pinto, C. Farina, M. R. Negrao and A. C. Tort, J. Phys. A 32, 4457 (1999). Frank:2006ww M. Frank and I. Turan, Phys. Rev. D 74, 033016 (2006). Erdas:2013jga A. Erdas and K. P. Seltzer, Phys. Rev. D 88, 105007 (2013). Martin-Ruiz:2016ijc A. Martín-Ruiz and C. A. Escobar, Phys. Rev. D 94, 076010 (2016). Cruz:2017kfo M. B. Cruz, E. R. Bezerra de Mello and A. Yu. Petrov, Phys. Rev. D 96, 045019 (2017). Erdas:2020ilo A. Erdas, Int. J. Mod. Phys. A 35, 2050209 (2020). Escobar-Ruiz:2021dxi A. M. Escobar-Ruiz, A. Martín-Ruiz, E. C. A. and R. Linares, Int. J. Mod. Phys. A 36, 2150168 (2021). Blasone:2018nfy M. Blasone, G. Lambiase, L. Petruzziello and A. Stabile, Eur. Phys. J. C 78, no.11, 976 (2018). Escobar:2020pes C. A. Escobar, L. Medel and A. Martín-Ruiz, Phys. Rev. D 101, 095011 (2020). Cruz:2018thz M. B. Cruz, E. R. Bezerra de Mello and A. Y. Petrov, Phys. Rev. D 99, 085012 (2019). Kostelecky:1988zi V. A. Kostelecky and S. Samuel, Phys. Rev. D 39, 683 (1989). Colladay:1996iz D. Colladay and V. A. Kostelecky, Phys. Rev. D 55, 6760 (1997). Colladay:1998fq D. Colladay and V. A. Kostelecky, Phys. Rev. D 58, 116002 (1998). Kostelecky:2003fs V. A. Kostelecky, Phys. Rev. D 69, 105009 (2004). Kostelecky:1994rn V. A. Kostelecky and R. Potting, Phys. Rev. D 51, 3923-3935 (1995). Colladay:1994cj D. Colladay and V. A. Kostelecky, Phys. Lett. B 344, 259 (1995). Colladay:1995qb D. Colladay and V. A. Kostelecky, Phys. Rev. 
D 52, 6224 (1995). Schwingenheuer1995 B. Schwingenheuer et al. Phys. Rev. Lett. 74, 4376 (1995). Gibbons1997 L. K. Gibbons et al. Phys. Rev. D 55, 6625 (1997). NA31:1990xkc R. Carosi et al. Phys. Lett. B 237, 303 (1990). Kostelecky:1997mh V. A. Kostelecky, Phys. Rev. Lett. 80, 1818 (1998). Schwinberg1981 P.B. Schwinberg, R.S. Van Dyck, H.G. Dehmelt, Physics Letters A 81, 2 (1981). VanDyck1986 R. S. Van Dyck, Jr., P. B. Schwinberg, and H. G. Dehmelt, Phys. Rev. D 34, 722 (1986). Brown1986 L. S. Brown and G. Gabrielse Rev. Mod. Phys. 58, 233 (1986). VanDyck1987 R. S. Van Dyck, Jr., P. B. Schwinberg, and H. G. Dehmelt Phys. Rev. Lett. 59, 26 (1987) Bluhm:1997ci R. Bluhm, V. A. Kostelecky and N. Russell, Phys. Rev. Lett. 79, 1432 (1997). Bluhm:1997qb R. Bluhm, V. A. Kostelecky and N. Russell, Phys. Rev. D 57, 3932 (1998). Bertolami:1996cq O. Bertolami, D. Colladay, V. A. Kostelecky and R. Potting, Phys. Lett. B 395, 178 (1997). Romeo:2000wt A. Romeo and A. A. Saharian, J. Phys. A 35, 1297 (2002). Bhattacharya:2007vz K. Bhattacharya, arXiv:0705.4275. Bhattacharya:1999bm K. Bhattacharya and P. B. Pal, arXiv:hep-ph/9911498. AFG P. Alberto, C. Fiolhais, and V. M. S. Gil, Eur. J. Phys. 17, 19 (1996). Bellucci:2009hh S. Bellucci and A. A. Saharian, Phys. Rev. D 80, 105003 (2009). Erdas:2021xvv A. Erdas, Int. J. Mod. Phys. A 36, 2150155 (2021).
http://arxiv.org/abs/2307.04358v1
20230710060523
False Sense of Security: Leveraging XAI to Analyze the Reasoning and True Performance of Context-less DGA Classifiers
[ "Arthur Drichel", "Ulrike Meyer" ]
cs.CR
[ "cs.CR", "cs.LG" ]
False Sense of Security: Leveraging XAI to Analyze the Reasoning and True Performance of Context-less DGA Classifiers [email protected] RWTH Aachen University [email protected] RWTH Aachen University The problem of revealing botnet activity through Domain Generation Algorithm (DGA) detection seems to be solved, considering that available deep learning classifiers achieve accuracies of over 99.9%. However, these classifiers provide a false sense of security as they are heavily biased and allow for trivial detection bypass. In this work, we leverage explainable artificial intelligence (XAI) methods to analyze the reasoning of deep learning classifiers and to systematically reveal such biases. We show that eliminating these biases from DGA classifiers considerably deteriorates their performance. Nevertheless we are able to design a context-aware detection system that is free of the identified biases and maintains the detection rate of state-of-the art deep learning classifiers. In this context, we propose a visual analysis system that helps to better understand a classifier's reasoning, thereby increasing trust in and transparency of detection methods and facilitating decision-making. Copyright held by the owner/author(s) 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version was published in The 26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID ’23), https://doi.org/10.1145/3607199.3607231 § INTRODUCTION In recent years, deep learning has been increasingly used as a building block for security systems incorporating classifiers that achieve high accuracies in various classification tasks. The advantage of deep learning classifiers is that they often outperform classical machine learning approaches, can be trained in an end-to-end fashion, and automatically learn to extract relevant features for classification. Therefore, less effort is often expended in creating such classifiers, since they seem to achieve high accuracies out-of-the-box and do not require the integration of domain knowledge as would be required to create feature-based or rule-based classifiers. This black-box nature of deep learning classifiers is particularly dangerous in the security domain, as the classifiers operate in an adversarial environment where an attacker actively aims to avoid detection. Since it is unclear what a classifier has learned, not only is its operation opaque, leading to trust issues, but it is also unclear whether the training data might have influenced a classifier in a way that an attacker could easily bypass the classification. Related work <cit.> has identified and summarized common pitfalls when using machine learning in computer security, including pitfalls that make it easier for an attacker to evade detection.
These pitfalls range from sampling bias, where the data used does not adequately represent the true data distribution, over inaccurate ground-truth labels, to incorporating spurious correlations, where artifacts unrelated to the classification problem provide shortcuts for distinguishing classes. To uncover potential classification biases introduced by these pitfalls, related work suggests using explainability techniques for machine learning. However, it remains unclear which strategy is appropriate to mitigate identified problems. In this work, we systematically apply explainability techniques to the use-case of Domain Generation Algorithm (DGA) detection to reveal a variety of biases in state-of-the-art deep learning classifiers. We then evaluate the loss in classification performance induced by the elimination of these biases from the classifiers and propose a classification system that is free of the identified biases. We focus on DGA detection because for this use-case a plethora of research exists, the state-of-the-art classifiers that achieve accuracies up to 99.9% are open source, and domains generated by different DGAs are publicly available in bulk through open source intelligence (OSINT) feeds such as DGArchive <cit.>. This allows us to replicate the results of related work before performing a critical analysis of automatic feature extraction. To this end, we first conduct an extensive evaluation of a variety of different explainability techniques including recent developments. Then, we demonstrate how these methods can be used to debug and improve the understanding of state-of-the-art classifiers. In this context, we identify features and classification biases and show how this knowledge can be exploited to evade detection with ease. To address these issues, we propose a classification system free of the identified biases combined with a visualization system that supports analysts in Security Operation Centers (SOCs), increases transparency and confidence in detection methods, and facilitates decision-making. Finally, as a secondary contribution, we use the knowledge gained from our study to improve the state-of-the-art deep learning as well as feature-based approaches for DGA multiclass classification in terms of classification performance and efficiency. Overall, we thus provide a systematic approach to expose biases and analyze the reasoning of deep learning classifiers for DGA detection. While some of these biases may seem obvious and easily avoidable, they are present even in DGA detection approaches proposed at leading security conferences (e.g., <cit.>). Moreover, these biases are rooted on subtle flaws that are rife in security research and affect many other use-cases as well <cit.>. Thus, with this work we aim to raise awareness of potential pitfalls in state-of-the-art classifiers that allow bypassing detection, and provide helpful guidance in conducting a similar analysis also for different use-cases. While features and biases are highly domain specific, the generation of explanations is completely independent of the underlying classification task. Hence, the fundamental idea of leveraging XAI to improve machine learning classifiers is applicable to a variety of different use-cases (e.g., phishing detection, malware detection, vulnerability discovery, or general network intrusion detection). § PRELIMINARIES The self-learned features of a deep learning classifier and thus potential biases in its classification decision are mostly use-case dependent. 
It is thus fundamental to understand the specifics of the classification task at hand, including the data used by state-of-the-art classifiers and the data preprocessing applied. §.§ Domain Generation Algorithm Detection Domain Generation Algorithms (DGAs) are used by malware infected devices to contact the botnet master's command and control (C2) server for updates or instructions (e.g., the target IP for a distributed denial-of-service (DDoS) attack). DGAs are pseudo-random algorithms which generate a large amount of domain names that the bots query one by one. The advantage of this approach over using fixed IP addresses or fixed domain names is that it creates an asymmetric situation where the botnet master only needs to register one domain, but the defenders have to block all generated domains. The botnet master knows the seed and the generation scheme and can thus register a DGA-generated domain in advance. When the bots query this domain, they get the valid C2 server's address, while all other queries result in non-existent domain (NXD) responses. §.§ State-of-the-Art Classifiers To combat DGAs, binary detection approaches have been proposed in the past, capable of distinguishing benign domains from DGA-generated domains with high probability and low false-positive rates (e.g., <cit.>). Going a step further, multiclass classifiers have been proposed that can not only separate benign domains from DGA-generated domains, but are also able to associate malicious domains with the DGA that generated them, allowing for the identification and targeted remediation of malware families (e.g., <cit.>). In general these approaches can be divided into two groups: context-less (e.g., <cit.>) and context-aware (e.g., <cit.>) approaches. Context-less approaches work exclusively with information that can be extracted from a single domain name, while context-aware approaches use additional information, such as statistical data from the monitored network, to further improve detection performance. Previous studies (e.g., <cit.>) have shown that context-less approaches achieve similar or even higher performance while requiring less resources and being less intrusive than context aware approaches. Furthermore, the machine learning classifiers can additionally be divided into feature-based classifiers such as support vector machines (SVMs) or random forests (RFs) (e.g., <cit.>), and feature-less (deep learning-based) classifiers such as recurrent (RNNs), convolutional (CNNs), or residual neural networks (ResNets) (e.g., <cit.>). Previous studies (e.g., <cit.>) have shown that feature-less approaches achieve superior classification performance. The currently best deep learning-based classifier for binary and multiclass classification is ResNet <cit.>. Hence, we analyze the reasoning of this particular classifier in detail. In addition, we use the insights gained from our analysis to identify missing features in EXPLAIN <cit.>, currently the most powerful feature-based multiclass classifier, and seek to bring its classification performance up to the state-of-the-art level. In the following, we briefly introduce both classifier types. Detailed information on the implementations of each classifier can be found in <cit.>. §.§.§ ResNet Drichel et al. <cit.> proposed ResNet-based models for DGA binary and multiclass classification. The classifiers are constructed from residual blocks containing skip connections between convolutional layers to counteract the vanishing gradient problem. 
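To make this concrete, a character-level residual block of this kind could be sketched as follows in Keras; this is an illustrative approximation rather than the published architecture, and the vocabulary size, kernel size, and pooling choice are assumptions.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=128, kernel_size=4):
    # Two 1D convolutions over the embedded character sequence, plus a skip connection.
    shortcut = x
    y = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    if shortcut.shape[-1] != filters:
        # Project the shortcut so both branches have the same channel dimension.
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

# Toy binary model: 253 integer-encoded characters -> embedding -> one residual block.
inputs = tf.keras.Input(shape=(253,), dtype="int32")
x = layers.Embedding(input_dim=41, output_dim=128)(inputs)  # vocabulary size of 41 is an assumption
x = residual_block(x, filters=128)
x = layers.GlobalMaxPooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)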
B-ResNet, the proposed binary classifier, uses only one residual block with 128 filters per convolutional layer while M-ResNet, the multiclass classifier, is more complex and composed of eleven residual blocks with 256 filters. §.§.§ EXPLAIN The authors of EXPLAIN <cit.> proposed several variants of their feature-based and context-less DGA multiclass classifier. The best performing model is a one-vs.-rest variant of a RF that extracts 76 features for each domain name to be classified, which can be categorized into 51 linguistic, 19 statistical and 6 structural features. §.§ Data To train machine learning classifiers for DGA classification, domain names labeled with the DGA that generated them are widely available in OSINT feeds such as DGArchive <cit.>. Benign training data can either be obtained by monitoring real networks or generated artificially based on public top sites rankings such as Tranco <cit.>. The problem with artificial data is that it may not accurately reflect real network traffic and thus may introduce bias and lead to misleading results. Further, the domain names included in public top sites rankings are on the resolving side of the DNS traffic because they are registered. Since most DGA-generated domains are not registered, additional bias may be introduced when they are paired with registered benign domain names for training. Due to these reasons, several approaches (e.g., <cit.>) focus on the classification of non-resolving DNS traffic (NX-traffic). Moreover, the focus on NX-traffic offers a number of other advantages: First, NX-traffic is easier to monitor because its volume is an order of magnitude smaller than the volume of full DNS traffic. Monitoring NX-traffic still allows us to detect malware-infected machines before they are instructed to participate in malicious actions, as DGAs can usually be detected in NX-traffic long before they resolve a registered domain for their C2 server. Second, NXDs are less privacy-sensitive compared to resolving domain names, as they generally do not contain user-generated domains, with the exception of typo domains. Although, NXDs may still contain sensitive information about an organization as a whole, the classification of NX-traffic seems better suited to a Classification-as-a-Service (CaaS) setting. Finally, it has been shown that classifiers trained on NX-traffic are more robust against certain adversarial attacks compared to classifiers trained on resolving traffic <cit.>. In this work, we follow the suggestions of related works and focus on the classification of NX-traffic. In the following, we briefly describe our data sources. §.§.§ DGArchive We use the OSINT feed of DGArchive <cit.> to obtain DGA-labeled domains. At the time of writing the feed contains approximately 123 million unique samples generated by 106 different DGAs. §.§.§ University Network We extract benign-labeled domain names from traffic recordings of the central DNS resolver of the campus network of RWTH Aachen University. This network includes several academic and administrative networks, dormitory networks, and the network of the affiliated university hospital. We selected a one-month recording of NXDs from mid-October 2017 until mid-November 2017 containing approximately 35 million unique NXDs for our evaluation. We deliberately chose an older NX-traffic recording because in our study we also want to evaluate whether a classifier learns time-dependent artifacts of a specific network or whether it generalizes well to new environments and is time-robust. 
We filter all NXDs from this data source using DGArchive to remove potentially malicious domains. Although the data may still contain mislabeled samples, the only way to avoid this problem is to use artificial data which may not accurately reflect real network traffic and thus may introduce additional bias. §.§.§ Company Network A second source for benign-labeled data are recordings of several central DNS resolvers of Siemens AG. Data obtained from this source is very diverse as the DNS resolvers cover the regions of Asia, Europe, and the USA. From the company, we obtain a one-month recording of benign NXDs from April 2019 containing approximately 311 million unfiltered NXDs. Benign data from this source is only used for the final real-world evaluation study, which is free of experimental biases, to assess whether a classifier contains any biases with respect to the network data on which it was trained and whether a classifier is time-robust. We again filter all NXDs from this data source using DGArchive to clean the data as much as possible. §.§.§ Ethical Considerations Our institution does not yet have an ethics review board that could have approved this study. However, we ensured that we do not record or use any personally identifiable information (PII) or quasi-identifiers. When recording traffic from the university and company network, we only observe NX-traffic and store the queried domain names, omitting all other information including IP addresses that could be used as pseudonyms to correlate domain names queried by the same host. Thereby, we only obtain a list of domain names that occurred within the recording period, with no relation to users within the network. Additionally, we focus on NX-traffic because NXDs are less privacy-sensitive compared to resolving domain names, as they generally do not contain user-generated domains, with the exception of typo domains. Although the NXDs may still contain sensitive information about an organization as a whole (e.g., they could indicate possible business relationships between different companies), it is questionable to what extent and with what accuracy such information can be recovered, if at all possible. §.§ Preprocessing It is important to understand the applied domain name preprocessing as this step can introduce significant classification biases. The works (e.g., <cit.>) that operate on single NXDs for classification make the data used unique and filter all benign samples against OSINT feeds to remove potentially contained malicious domains before training and testing a classifier. Other than that, they do not apply any filtering to the benign-labeled data used, since it is captured from real-world networks. The argument for this decision is that this feeds the classifier with the queries that occur naturally in a network, and does not bias the classification performance in any direction since no filtering is applied. While the feature-based classifiers (e.g., <cit.>) start extracting predefined features from this data, the deep learning-based approaches (e.g., <cit.>) have to convert the domain names into a numerical representation in order to be able to feed them to a neural network. Most works (e.g., <cit.>) follow a similar approach, which mainly differs in the maximum acceptable length of a domain. First, all characters are converted to lowercase (which is an uncritical operation as the DNS operates case-insensitive) and every character is mapped to a unique integer. 
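A minimal sketch of this encoding (together with the left zero-padding described next) might look as follows; the exact character set and index assignment are illustrative assumptions rather than the mapping used by any particular work.

import string

# Illustrative alphabet: lowercase letters, digits, and characters that commonly occur in domain names.
ALPHABET = string.ascii_lowercase + string.digits + "-._"
CHAR2INT = {c: i + 1 for i, c in enumerate(ALPHABET)}  # index 0 is reserved for padding

def encode_domain(domain, max_len=253):
    # Lowercase the domain and map every character to a unique integer.
    ids = [CHAR2INT.get(c, len(ALPHABET) + 1) for c in domain.lower()][:max_len]
    # Pad with zeros from the left up to the maximum domain length.
    return [0] * (max_len - len(ids)) + ids

print(encode_domain("Example-Domain.test")[-20:])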
Additionally, the input is padded with zeros from the left side. The authors of the ResNet classifier <cit.> propose padding to the maximum domain length of 253 characters in order to be able to perform training and classification on every possible NXD while using batch learning. In this work, we follow these suggestions of related work on preprocessing. § EVALUATION OVERVIEW In this section, we describe our evaluation methodology, explain the decisions underlying the dataset generation process, and perform a result reproduction study of the classifiers from related work to verify our evaluation setup. §.§ Datasets & Methodology We create two disjoint datasets, one to train and test a set of state-of-the-art models (DSmod), and one to analyze different explainability methods and investigate biases (DSex). For each DGA in DGArchive, we randomly select 20,000 samples. If less than 20,000 samples are available per DGA, we select all samples. Then we split the samples for each DGA equally between the two datasets. For two DGAs, only five samples are available in the OSINT feed. We constrain that at least four samples are available for training classifiers within DSmod. Thus, for two DGAs (Dnsbenchmark and Randomloader), only one sample is contained in DSex.[We intentionally include underrepresented classes because the inclusion of a few training samples per class allows a classifier to detect various underrepresented DGAs with high probability that would otherwise be missed. At the same time, this does not affect a classifier's ability to recognize well-represented classes <cit.>.] Thereby, we are able to perform a four-fold cross validation stratified over all included classes using DSmod, resulting in four different classifiers being trained and tested. Finally, we select the same number of benign samples as we selected malicious samples, resulting in balanced datasets. In binary classification experiments, we use all benign samples and use the same label for all malicious domains, regardless of which DGA generated a domain. In multiclass classification experiments, we limit the amount of benign samples to 10,000 in order to have a more evenly distributed amount of samples between the various classes. Here we assign a separate label for each DGA. In total, DSmod and DSex each contain approximately 1.2 million domains derived from 107 different classes. We train all four classifiers in the four-fold cross validation with DSmod using early stopping with a patience of five epochs to avoid overfitting. These classifiers are then used to analyze different explainability methods and investigate biases using samples from DSex. This methodology allows us to conduct a study to reproduce the results of related work (using DSmod) as it replicates the classification setting used by the state of the art. In addition, we can evaluate four classifiers and 20 explainability methods on the same unseen data (DSex) and can assess whether the classifiers converge to similar local optima and whether the explainability methods provide stable results between different models. However, this methodology introduces spatial and temporal experimental biases <cit.>. Spatial bias arises from using an unrealistic ratio of benign to malicious samples in the test data. For the DGA detection use-case, most queried domains within a network are benign. This significant class imbalance can lead to base-rate fallacy <cit.> where evaluation metrics such as true-positive rate (TPR) and false-positive-rate (FPR) are misleading. 
Temporal bias is introduced by temporally inconsistent evaluations which integrate future knowledge about testing samples into the training phase. In the state-of-the-art classification setting, temporal bias is introduced in two ways: First, four-fold cross validation does not ensure that all training samples are strictly temporally precedent to the testing ones. Second, the benign and malicious samples in the datasets are not from the same time window (one-month real-world benign data compared to several years of DGArchive data). Thus, we conduct an additional evaluation under real-world conditions where we mitigate all experimental biases in Section <ref>. To this end, we make use of our second source for real-world data, the company network. In this context, we also assess whether classifiers generalize between different networks and are time-robust. §.§ State-of-the-Art Results Reproduction Before conducting the actual explainability study, we reproduce the results of related work to validate our evaluation setup. We use the same evaluation metrics as in the original papers: accuracy (ACC), true-positive rate (TPR), and false-positive rate (FPR) for the binary experiments, and f1-score, precision, and recall (which is equal to TPR) for the multiclass experiments. As suggested in <cit.>, we use macro-averaging to calculate the overall evaluation metrics because the available samples vary widely per DGA class. This way we do not skew the overall score towards well-represented classes. We present the averaged results of the four-fold cross validation in Table <ref>. The upper part of the table shows the results of the binary evaluation, the lower part those of the multiclass evaluation. By comparing these results with the values reported in the original papers, we can confirm that we were able to reproduce the results, as we arrive at very similar values. The last row of the table shows the results for an adapted model of M-ResNet aimed at making it more explainable. Recently, Bohle et al. <cit.> proposed a so-called B-Cos transform which, when interchanged with linear transforms of neural networks, increases the networks' explainability by promoting the alignment of weight-input during training. The alignment pressure on the weights ensures that the model computations align with task-relevant features and therefore become explainable. Since interchanging the linear transforms of the ResNet model with B-Cos transforms could introduce a trade-off between classification performance and explanatory fidelity, we also evaluate this model using DSmod and present the results in the last row of Table <ref>. Indeed, this modification slightly sacrifices model performance in favor of a more explainable model compared to the M-ResNet baseline. § EXPLAINABILITY METHODS As a secondary contribution to the critical analysis of automatic feature extraction for DGA detection, we conduct a comparative evaluation of different explainability methods. In this section, we briefly introduce explainability techniques for machine learning and present the results of the comparative evaluation. The exhaustive evaluation can be found in Appendix <ref>. In general, explainability methods can be divided into two categories: white-box approaches, which are model-specific and use knowledge, e.g, about the internal architecture and model weights of a neural network, and black-box approaches that are model-agnostic. 
In this work, we focus on white-box approaches as they have been proven to produce better results compared to black-box approaches <cit.>. The general idea of white-box approaches to deriving local explanations for input samples is to compute the gradients from the output back to the input. Thereby, for an input sample x, a neural network N, and a prediction y = N(x), a relevance vector r is derived which describes the relevance of each dimension of x for the predicted label y. Thus, in terms of context-less DGA classification, an explainability method determines the relevance of each character in the context of its position for the assignment of an individual domain name to a particular class. When evaluating the explainability methods, we focus on the explanations generated for the predictions of a multiclass classifier because, unlike a binary classifier, it has a variety of other prediction possibilities in addition to distinguishing between benign and malicious. In this work, we make use of the iNNvestigate library <cit.> which implements many explainability methods and provides a common interface to evaluate 19 white-box approaches including Layer-wise Relevance Propagation (LRP) <cit.> using 12 different rules. In addition, we also evaluate explanations generated by the recently proposed B-Cos network adjustment <cit.>. Similarly to Warnecke et al. <cit.>, we evaluate the explainability methods based on four metrics: fidelity, sparsity, stability, and efficiency. Since we only evaluate white-box methods that compute relevance vectors directly from the weights of a neural network, all explainability methods are complete in that they are able to compute non-degenerate explanations for every possible input. In contrast to <cit.>, we evaluate a total of 20 white-box explainability approaches (compared to the three evaluated by Warnecke et al.) and extend the fidelity and stability metrics to be more suitable for analyzing DGA classifiers. Based on the four metrics, we select the top five techniques (b-cos, deeptaylor, integratedgradients, lrp.alpha2beta1, and lrp.zplus) for our bias investigation study in the next section. § INTERPRETING THE EXPLANATIONS Having decided on explainability methods, we can now examine the reasoning of the deep learning classifiers. To this end, we use the classifiers trained during the four-fold cross validation on DSmod to predict all samples of DSex, and then use all selected explainability methods to compute explanations. Subsequently, for each method and class, we use DBSCAN <cit.> to cluster the relevance vectors and group similar explanations together. Finally, we manually review the clusters to identify potential features of the deep learning classifiers. For each domain name and relevance vector, we visualize the importance of each character through heatmaps. We encode positive contributions to the predicted label as green colors and negative contributions as red colors. An example of the clustering and visualization of the relevance vectors generated by lrp.zplus for the Banjori DGA is shown in Fig. <ref>.[ Note that relevance vectors are not direct characteristics of individual inputs, but rather of the model that processes those inputs. By clustering the relevance vectors, we can still find clusters similar to those in Fig. <ref>, but in this case it might be more appropriate to first compute clusters based on other features such as n-gram embeddings.
However, it is unclear what other features should be used to calculate such clusters (which brings us back to manual feature engineering) since, e.g., n-gram embeddings would not be useful for hex-based DGAs. ] In the following we present our findings from this study. We use the explainability methods to identify potential biases and then conduct various experiments to quantify the impact on classification. While some of these biases may seem obvious and easily avoidable, they are present even in DGA detection approaches proposed at leading security conferences (e.g., <cit.>). Moreover, these biases are rooted on subtle flaws that are rife in security research and affect many other use-cases as well <cit.>. §.§ Revealing Biases In this work, we mainly focus on the classification biases between the benign and the malicious class since the most severe danger in misclassification is that DGA-domains are wrongly labeled as benign. If a certain proportion of samples is incorrectly assigned to a DGA by a multiclass classifier, this has less impact because the domains are still detected as malicious. The main incentive for an adversary would be to exploit biases to force a detection system to classify DGA-domains as benign, allowing communication with botnets. Therefore, we consider the threat model, which attempts to mask domains as if they were generated by another DGA, to be less reasonable. In total, we identified five biases present in current state-of-the-art classifiers that provide a false sense of security, as they can be easily exploited to evade detection.[While we analyzed the ResNet-based classifier in detail, we verified that the identified biases are also exploitable in the LSTM-based <cit.> and the CNN-based classifier <cit.>.] Moreover, biases inherent in a classifier can affect the classifier's ability to detect yet unknown DGAs. §.§.§ Length Bias Across all explainability methods and across many clusters, dots included in a domain name are often calculated as particularly important for the classification. We reckon that the dots themselves are not important in isolation, but that the deep learning classifiers infer the features of domain length and number of subdomains from it. To assess the importance of this feature, we conduct the following experiment: First, we chose the Qadars DGA as it generates domains of a fixed length and is correctly attributed by M-ResNet most of the time (f1-score of 0.99400). In detail, all domains generated by Qadars match the following regular expression (regex): , i.e., Qadars generates domains with a fixed length of 12, using only the characters a-z and 0-9, and finally adds a dot and one of four possible top-level domains (TLDs). Then, we adapt the reimplementation of Qadars[<https://github.com/baderj/domaingenerationalgorithms>] to generate domains of all possible lengths. Note that each domain name identifier can be a maximum of 63 characters long before it must be separated by a dot, and the full domain name can be a maximum of 253 characters long. For each possible length and for each known seed (six in total), we generate at most 100 different domains, resulting in a dataset size of around 147,000 unique samples. For each sample, we always fill in the highest level subdomain with characters before adding a dot. Finally, we feed the generated domains into the M-ResNet classifier and observe the percentage of classifications assigned to Qadars, any other DGA, and the benign class depending on the domain length. In Fig. 
<ref>, we display the results of this experiment. The percentage of classifications assigned to Qadars increases with domain length, peaking at the original domain length of 12, and then falls abruptly from there. As the domain length increases, the percentage increases slightly because the classifier has more information to derive the correct prediction. Most of the time, however, the classifier assigns the samples to different DGA classes. The percentage of benign classifications increases rapidly from the length of 69, 133, and 197. This is because at these lengths additional subdomains must be included to form a valid domain. The more dots, the more benign classifications. Sometimes even more than 50% of all classifications are assigned to the benign class. After the dots are inserted, the benign classifications decrease with increasing domain length as more information generated by the DGA is available for prediction. Investigating the sample length distribution of the classifiers' training set illustrates the problem that with increasing length, more domains are classified as benign. In Fig. <ref>, we display two box plots of the domain length distribution for the benign and malicious classes. The maximum domain length of a DGA-labeled sample within the training set is 59. Thus, it is very likely that a classifier learns to assign a sample to the benign class with greater probability if it exceeds 59 in length. Fortunately, this is not the only feature on which classification depends. Since the domain length depends on the number of dots/subdomains, we examine this bias below. §.§.§ Number of Dots/Subdomains Bias As seen in the previous section, the number of dots/subdomains has a significant impact on the classification. Looking at the number of dots contained in the training set separately for the benign and malicious classes, we can see that the benign class contains significantly more dots. The average number of dots is 7.12, the median is 5, and the maximum is 35. In comparison, the average for the malicious class is 1.08, the median is 1 and the maximum is 2. In fact, only 19 DGAs generate domains with more than one dot and only two DGAs (Beebone and Madmax) have dots past their effective second-level domain (e2LD). We refer to e2LD here because some DGAs use dynamic DNS services or public suffixes, which should not be counted as their generated second-level domain. §.§.§ www. Bias In connection to the number of dots/subdomains bias we observed during our manual review of the relevance vector clusters for the benign class, that over all explainability methods, clusters have formed which highlight the importance of the “www.” prefix. Examining the distribution of domains with the prefix “www.” within the training set, we find that the benign class contains 3,382 (0.00288%) samples, while the malicious class contains only 183 (0.00016%) samples. To assess the impact of this bias, we perform the following experiment: We take the four binary classifiers of the four-fold cross validation and all the malicious samples that the classifiers have correctly classified (true-positives). Then we prepend the “www.” prefix to all true-positives and reevaluate the models on these samples. On average over all folds, 434,916 (74.23%) out of 585,907 true-positives became false-negatives, while only 150,991 were still correctly classified. 
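The core of this re-evaluation can be sketched in a few lines; the prediction function, model, and encode_batch helper in the usage comment are placeholders, not the authors' code.

def www_evasion_rate(predict_fn, true_positives, threshold=0.5):
    # predict_fn: callable mapping a list of domain strings to malicious-probability scores.
    # true_positives: DGA-generated domains that the classifier originally detected.
    prefixed = ["www." + d for d in true_positives]
    scores = predict_fn(prefixed)
    flipped = sum(1 for s in scores if s < threshold)  # now classified as benign
    return flipped / len(true_positives)

# Hypothetical usage:
# rate = www_evasion_rate(lambda ds: model.predict(encode_batch(ds)), tp_domains)
# print(f"{rate:.2%} of true positives became false negatives")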
This shows that there is a huge bias regarding this prefix and malware authors could exploit this issue by simply prepending “www.” to their generated domains in order to evade detection of state-of-the-art classifiers. Although, only a small fraction of all samples have the “www.” prefix, it can introduce bias into classification if the feature is sufficiently discriminatory. §.§.§ Top-Level Domain Bias Through our study, across all explainability methods and across multiple classes, we encountered multiple occurrences of clusters that, in combination with other features, highly value the top-level domain (TLD) as a significant feature. To assess the impact of this feature, we make use of out-of-distribution (OOD) testing, as it was identified to be one of the most effective ways to reveal biases <cit.>. To this end, we perform a leave-one-group-out evaluation. In detail, similarly to the four-fold cross validation, we train a classifier for every fold on the respective fold's training data of DSmod, except that we omit all samples of a particular class. Then, we use the four trained classifiers to predict all samples of the left out class contained in DSex. As an example, we present the results obtained on the Mirai DGA leave-one-group-out evaluation. All samples generated by Mirai use one of these three TLDs: online, support, and tech. In each fold all Mirai samples that use the online and tech TLD are predicted to be malicious while all samples with the support TLD are labeled as benign. It seems that this is because the classifier tends to classify samples with never-seen TLDs into the benign class. Omitting all Mirai samples from training has the effect of removing all samples that use the support TLD from the entire training set. Although there appears to be enough information within the second-level domain to correctly assign a sample to the malicious class (as 100% of all online TLD samples are correctly assigned), the classifier is biased due to the unknown TLD to attribute the samples to the benign class. Similar pictures emerge also for a variety of other DGAs. Examination of the TLD distribution within the training set supports this statement. There are 413 distinct TLDs in the benign data, of which 274 are unique to benign samples. In comparison, there are only 258 different TLDs within the malicious labeled data, of which 115 are uniquely used by malicious samples. On the other hand, all samples with the tech TLD were also correctly labeled as malicious although this TLD was completely removed from the training data. Since all support TLD samples are misclassified and all samples use the same generation algorithm, it is unlikely that the information within the second-level domain was discriminatory enough for the tech TLD samples. Analyzing the calculated relevance vectors for these samples revealed that the classification is significantly influenced by the “ch” suffix of the tech TLD. Looking at the ch TLD distribution within the training data it becomes apparent why this is the case: there are 2063 ch TLDs within the malicious samples and only 51 within the benign samples. This bias investigation delivers two results: First, state-of-the-art classifiers heavily depend on the TLD, resulting in the fact that a malware author could simply change the TLD used to evade detection. Second, it might be useful to encode the TLD as a one-hot encoded vector before inputting it to a classifier since it is rather a categorical feature. 
In the case of the Mirai evaluation, this was a stroke of luck for the defender site. However, since the TLD can be freely chosen, an attacker could exploit this knowledge to evade detection. §.§.§ Validity/Diversity Bias During our study, we encountered several large benign clusters that contain domains that are invalid and therefore would not resolve (e.g. due to an invalid or missing TLD). In fact, 7.64% of all benign samples within the training set are invalid, while all malicious samples are valid. An attacker has no incentive in generating invalid samples, as they would be useless for establishing connections between bots and their C2 server. Thus, a classifier most likely learns the shortcut to distinguish domains based on their validity. Although this is not a true bias, since invalid domains cannot be resolved and therefore assigned to the benign class, it does have an impact on the reported FPR of state-of-the-art classifiers as invalid samples are probably easier to classify. While there is nothing wrong in calculating the FPR for the detection system which pre-filters invalid domains to the benign class, here the classifiers real true-negative rate (TNR) is artificially inflated. Furthermore, including invalid samples in the training sets carries the additional risk of the classifier focusing on useless information and prevents the classifier from learning more complex features that might be useful in separating valid benign samples from malicious ones. In addition, we found several benign clusters specific to the network in which the data was collected (e.g., domains including the official e2LD of the university). Training and evaluating classifiers on this data could lead to misleadingly high results, as the classifiers may have only learned to separate network-specific domains from malicious ones, but they do not generalize between different networks. § MITIGATING BIASES Now that we have identified several biases, we present strategies to mitigate them. In addition, in various experiments, we measure the cost in terms of loss in classification performance for avoiding biases, since biases are nothing more than features that appear in the training data. For instance, biases such as the TLD are perfectly valid signals for the classifier to learn based on the underlying data distribution, since such features can be used to some extent to distinguish between benign and malicious samples. However, this is not desirable for features that can be easily modified by an attacker, as they can be exploited (e.g. by exchanging the TLD) to evade detection. Finally, in a real-world study, we measure the true classification performance of DGA classifiers that are free of the identified biases, and evaluate whether a classifier generalizes to different networks and is time-robust. In other words, here we evaluate whether a classifier is free from biases that might be introduced by artifacts in specific networks and at certain times. §.§ Mitigation Strategies In the following, we address the individual biases and suggest how to mitigate them. §.§.§ Number of Dots/Subdomains, www., and TLD Biases As demonstrated in the previous section, these biases can be easily exploited by an attacker to evade detection. Adding the “www.” prefix to malicious domains converted around 75% of true-positives into false-negatives, while selecting a TLD that was never seen by a classifier during training allows for complete bypass of detection. 
Since the botmaster's authority over a domain starts with the e2LD and all other subdomains as well as the TLD can be freely selected, we suggest to perform the classification exclusively on the e2LD and to omit all other information. Note that this does not open up any new attack vector, but may remove valuable features that could be used for classification, resulting in a decrease in overall classification performance. Hence, in Section <ref>, we measure the trade-off between bias-reduced classification and performance. §.§.§ Validity/Diversity Bias Since invalid samples can be pre-filtered and assigned to the benign class, we choose to only train a classifier on valid domains, allowing the classifier to focus on task-relevant features. As a result, the FPR of the classifier reported by us is likely to be larger than that reported by related work, since the classifier does not encounter easily classifiable invalid samples during testing. Further, to mitigate the problem that a classifier only learns to separate network-specific domains from malicious ones, we focus on diverse data by training on unique e2LDs. In doing so, we aim to train classifiers that generalize well between different networks. Focusing solely on unique e2LDs has the effect that the underlying sample distribution changes fundamentally. Training using this data will again increase the classifier's FPR since a e2LD occurs only once, either in the training or test set. In contrast, in the state-of-the-art classification setting, a large proportion of unique domains with the same e2LD occur, which may be network-specific, such as domains that contain the university's official e2LD. Once the classifier learns of a benign e2LD, samples with the same e2LD can be easily assigned to the benign class. §.§.§ Length Bias Focusing exclusively on valid and diverse e2LD already significantly equalizes the length distribution between benign and malicious samples and almost mitigates the bias. In Fig. <ref>, we show two box plots of the unique and valid e2LD length distributions for the benign class and malicious samples. In comparison to the sample length distributions in the state-of-the-art classification setting (cf. Fig. <ref>), the e2LD length distributions are much more similar. Unfortunately, thereby the length bias cannot be fully mitigated. The classifier will probably still tend to classify longer samples towards the benign class. However, as we saw during the length bias experiment, longer samples contain more information that helps the classifier make the correct decision. Thus, for an adversary, increasing the domain length is more of a trade-off between exploiting length bias and providing too much information to the classifier. Note, reducing the domain length of input samples to mitigate this bias is not a viable option, as this opens up a new attack vector where an attacker can hide features that would have sorted a domain into the malicious class. On the other hand, it is possible to generate additional artificial domains by adapting publicly available reimplementations of DGAs (similar to the length bias experiment) to balance the length distributions and thus mitigate the bias completely. However, this may require oversampling of benign data and care must be taken to ensure that this does not affect classification performance on clean data. Since the focus on valid and diverse e2LD almost evens out the distributions, we decided against it. 
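As a rough illustration of these mitigation steps, e2LD extraction, a simplified validity pre-filter, and deduplication could be implemented as follows; the tldextract package is one possible way to handle public suffixes, and the validity check shown is a simplification of the actual filtering.

import tldextract

def to_e2ld(fqdn):
    # Return the effective second-level domain of a FQDN, or None if the domain looks invalid.
    ext = tldextract.extract(fqdn.lower())
    # Simplified validity check: require a known public suffix and a non-empty e2LD.
    if not ext.suffix or not ext.domain:
        return None
    return ext.domain

def unique_valid_e2lds(fqdns):
    # Reduce a stream of FQDNs to unique, valid e2LDs.
    seen = set()
    for fqdn in fqdns:
        e2ld = to_e2ld(fqdn)
        if e2ld is not None and e2ld not in seen:
            seen.add(e2ld)
            yield e2ld

print(list(unique_valid_e2lds(["www.example.co.uk", "mail.example.co.uk", "no-tld-here"])))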
§.§ Bias Mitigation Experiments In the following, we measure the cost in terms of loss in classification performance for avoiding biases. We expect classification performance to deteriorate because biases are nothing more than features based on the underlying distribution of the training data. All experiments are similar to the four-fold cross validation performed in Section <ref>, except that here we focus on diverse data. To this end, we first map all fully qualified domain names (FQDNs) to their e2LDs. We then randomly sample the e2LDs and then select exactly one sample per unique e2LD for each evaluation scenario. For binary and multiclass classification, we examine four scenarios each: classification on valid and diverse FQDNs, on FQDNs without TLDs (no TLDs), on FQDNs without subdomains (e2LDs + TLDs), and exclusively on e2LDs. In the upper part of Table <ref>, we present the results for the binary setting while the lower part of the table displays the results for the multiclass setting. For convenience we also show the performance of the classifiers in the state-of-the-art classification setting from Section <ref>. As suspected, when only valid and diverse samples are used, the performance of the binary classifier is significantly worse, especially with respect to the FPR. Removing the TLDs from the FQDNs has less of an impact on performance than removing all subdomains after the e2LD. However, in both scenarios the loss in performance is tremendous, increasing the FPR to about 7.1% - 7.6%. Classification solely on the e2LD delivers the worst results reaching a 89.1% TPR @ 10.5% FPR for the decision threshold of 0.5. Examining the individual TPRs for each DGA, we find that the rate drops significantly for some DGAs, while for others it remains high, even reaching 100%. Although the average TPR drops significantly compared to the state-of-the-art setting, we expect that most DGAs could still be detected as they query multiple domains before finally resolving a registered domain. Provided that a decision is not made on the basis of a single query. Only the DGAs Redyms and Ud3 would be completely missed as for these DGAs the TPRs are zero over all four folds. In the multiclass setting, classification performance is not affected as much when trained on valid and diverse FQDNs. This is because focusing on these samples mainly affects the benign class and a few DGA classes that have a small sample size and generate FQDNs that map to the same e2LD (e.g., they generate domains with the same e2LD but with different TLDs). However, most DGAs are not affected by this. In contrast to the binary setting, here the TLDs are more relevant for classification than the subdomains after the e2LD. If only the e2LDs are used for classification, the performance deteriorates drastically (mainly because of the missing TLDs). Removing all subdomains after the e2LD affects only two DGAs: Beebone and Madmax. However, when the subdomains are removed, there is still enough information in their domain names to classify them correctly most of the time. Beebone's f1-score drops slightly from 97.7% to 95.7%, and Madmax's from 74.9% to 60.2%. In summary, the TLD is vital for the multiclass classification. In the binary setting, classifying exclusively e2LD is as bias-free as possible but the achieved performance does not seem to be acceptable. 
However, the effective TPR@FPR operation point of a detection system that pre-filters invalid samples and classifies all input samples regardless of the uniqueness of their e2LD can still be acceptable. In the next section, we get to the bottom of this question. §.§ Real-World Study In this section, we perform a real-world study to assess the true performance of bias-reduced DGA binary classification. In this context, we evaluate whether the classifiers generalize between different networks and are time-robust. Simultaneously, we enforce that the evaluation is free of experimental biases. In the following, we refer to classifiers that mitigate the identified biases as bias-reduced classifiers. To this end, we train a classifier using the real-world benign e2LDs from the university network recorded from mid-October 2017 to mid-November 2017, as well as DGArchive data that was available until the end of the recording period. In detail, DGArchive contains approximately 53 million unique domains generated by 85 different DGAs up to this point in time. Training a classifier using a dataset which is similar to DSmod, but with the constraint that the malicious samples are from the same time window as the benign samples, mitigates one of the two experimental temporal biases included in the state-of-the-art classification setting. To mitigate the second experimental temporal bias, that requires that all training samples are strictly temporally precedent to the testing ones, we evaluate the classifier on approximately 311 million benign e2LDs captured in the company network in April 2019 (cf. Section <ref>) and DGA-domains from DGArchive that were generated by DGAs in April 2019. Within April 2019, 46 DGAs (four of which were unknown at the time of the training) generated approximately 1.2 million domains. In this way, we eliminate the experimental temporal biases, and can guarantee that the benign samples come from different networks and that the time interval between the occurrence of the training and the test samples is about 17 months. To eliminate the experimental spatial bias, it is required to approximate the true ratio of benign to malicious samples in the test data. Since the true sample distribution is unknown, we conduct two experiments to estimate the true detection performance of bias-reduced DGA binary classification. First, we evaluate the classifier using all 311 million benign e2LDs and gradually increase the amount of included malicious test samples generated in April 2019 from 1% to 100% for each DGA. Thereby, the ratios between the domains generated by the different DGAs follow the true distribution. In the following, we report the obtained results of the classifier that first checks whether a sample is invalid. If it is invalid, the sample is ignored. Otherwise, it is evaluated by the classifier. In Fig. <ref>, we display the TPRs for fixed FPRs between [0.001,0.008] for the bias-reduced classifier depending on the contamination of the test set (i.e., the relative amount of included malicious test samples from April 2019). The achieved TPRs are nearly stable for all fixed FPRs, showing that no base-rate fallacy is measurable within these ratios of benign to malicious samples. We argue this is because the benign data heavily overshadows the malicious data even when we include 100% of all DGA-domains from April 2019. 
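One way such fixed-FPR operating points can be computed is sketched below; the score conventions and variable names are assumptions, not the evaluation code actually used.

import numpy as np

def tpr_at_fixed_fpr(benign_scores, malicious_scores, fpr=0.001):
    # Choose the decision threshold as the (1 - fpr) quantile of the benign scores,
    # so that at most roughly the desired fraction of benign samples is flagged.
    benign = np.asarray(benign_scores)
    malicious = np.asarray(malicious_scores)
    threshold = np.quantile(benign, 1.0 - fpr)
    tpr = float((malicious > threshold).mean())
    return tpr, float(threshold)

# Hypothetical usage with classifier scores in [0, 1]:
# tpr, thr = tpr_at_fixed_fpr(scores_benign, scores_dga, fpr=0.008)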
In this experiment, the relative percentage of malicious samples varies between 0.00362% and 0.35998%, which means that in the worst case, 99.64002% of the test data is still from the benign class. As it is unclear, how many DGAs are present in a real-world network, we additional conduct a second experiment to estimate the worst-case classification performance. Here, for each DGA, we evaluate the classifier using all malicious samples generated in April 2019 of that particular DGA and all 311 million benign e2LDs. In total, we thus evaluate the classifier using 46 test sets, since there are 46 DGAs that generate at least one domain in April 2019. On average the bias-reduced classifier achieves a TPR of 0.85735 at a FPR of 0.00506 for the decision threshold of 0.5. In Fig. <ref>, we display the receiver operating characteristic (ROC) curve averaged over all evaluation runs for the FPR range of [0,0.01]. In addition, we also show the ROC curves for the best-detected DGA (Dyre) and the worst-detected DGA (Nymaim2). We argue that the classifier is remarkable time-robust and generalizes well to different networks. The temporal and spatial changes in data distribution have increased the FPR compared to the state-of-the-art setting at the decision threshold of 0.5. However, this was to be expected as the distribution of benign samples naturally varies between networks, at least to some degree. Moreover, the classifier is able to achieve a slightly lower TPR as the bias-reduced e2LD classifiers from the previous section. Surprisingly, for three of the four DGAs that were unknown at the time of training (Ccleaner, Tinynuke, Wd), the bias-reduced classifier is able to correctly classify 100% of all generated samples. Only the Nymaim2 DGA is detected worse with a TPR of 14.84%, which is the main reason for the slightly lower average TPR compared to the bias-reduced e2LD classifiers from the previous section.[ We additionally evaluated the four e2LD classifiers from the previous section against the 311 million benign NXDs and all DGA-domains from DSex (which are completely disjoint with the training samples) to evaluate the performance using all 106 known DGAs. Thereby, we arrive at very similar results. We present the corresponding ROC curves in Appendix <ref>. Note that this of course reintroduces experimental temporal bias. ] At a fixed FPR of 0.008 the bias-reduced classifier achieves a TPR of about 89%. In practice, it might be advantageous to set the threshold to a lower fixed FPR value. Setting the FPR at 0.001 to 0.002 would still allow an approximate detection rate of about 67% to 78%. However, how useful this is depends on what is done with the classification results. Context-less DGA detection was never intended for single-domain based decision-making. This evaluation assessed the true performance of bias-reduced DGA classifiers and demonstrated the limits of what is possible without contextual information. § BIAS-REDUCED DGA CLASSIFICATION In this section, we use the insights gained from the bias mitigation and the real-world study to propose a classification system that (1) is as bias-free as possible and (2) does not miss entire DGA families. Further, we propose an approach to improve visualization support to increase trust in and transparency of detection methods and facilitate decision-making. §.§ Bias-reduced DGA Classification System As previous evaluations have shown, bias can be easily exploited to evade detection. Focusing exclusively on e2LD helps mitigate most identified biases. 
However, this causes the classifier to lose the ability to recognize specific DGA families as a whole. In the case of multiclass classification, we have seen that the classification relies heavily on information outside of the e2LD to correctly assign domains of multiple classes. In the following, we present a detection system that counteracts these issues. In Fig. <ref>, we visualize the system's architecture. In the first step, the detection system evaluates whether the entered NXD is invalid or not. If it is invalid, it is ignored; otherwise, the input sample is passed to the binary classification step. Here, two classifiers work in parallel: a bias-reduced classifier that classifies the e2LD of the input sample, and a full classifier that uses the FQDN. This classification step can lead to four possible outcomes: First, both classifiers agree on the benign label, so the detection system also outputs benign. Second, the bias-reduced classifier outputs malicious while the full classifier predicts benign. This is an indication that an attacker might try to exploit biases to evade detection. Third, the bias-reduced classifier predicts benign and the full classifier malicious. This suggests that the features outside the e2LD may be indispensable to detect the DGAs that the bias-reduced classifier would miss. And fourth, both classifiers agree on the malicious label, indicating that the input sample is very likely DGA-generated; a minimal sketch of this decision logic is given below. Regardless of the results, the input sample can be passed to a multiclass classifier trained on FQDNs to associate the sample with the DGA that most likely generated it. Finally, we propose to pass the input sample, together with the classification results, to a visualization system to understand the classifier's reasoning and to support the decision-making process. Using this detection system, we achieve bias-reduced DGA detection and do not miss entire DGA families. §.§ Visualization Support The proposed detection system gets the most out of context-less and bias-reduced DGA classification. In order to facilitate decision-making and to better understand the reasoning of a classifier, we propose a visualization system. In this work, we demonstrated the limits of context-less classification and showed that decision-making based on the classification result of a single query is practically insufficient. To make a decision based on multiple classification results, the minimum information required is the mapping between the host and the queried domains. While this information may not be available to a CaaS provider, the network operator that uses the service most likely has this knowledge. In the following, we only use this additional knowledge to facilitate the work of SOC analysts. Fig. <ref> shows the different views of the proposed visualization system based on mock data. Two main view groups summarize the classification results: the global and the local views. Both contain the queried domain names, in which the relevance of each character to the prediction is highlighted using a heatmap. In this example, we used integratedgradients to compute the relevance vectors for the predictions of the multiclass model. However, any other explainability method can be chosen. In addition, we also display the total number of times the domain was queried as well as the classification results of the bias-reduced, full binary, and multiclass classifiers.
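To make the four-outcome decision step described above concrete, the following is a minimal sketch of the branching logic, not the authors' implementation. The classifier objects, the extract_e2ld and is_valid_nxd helpers, the 0.5 threshold, and the way the two disagreement cases are labeled are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Verdict:
    label: str                    # "benign", "malicious", or "flagged"
    note: str                     # which of the four outcome cases applied
    family: Optional[str] = None  # DGA family suggested by the multiclass model

def classify_nxd(nxd, bias_reduced_clf, full_clf, multiclass_clf,
                 extract_e2ld, is_valid_nxd, threshold=0.5):
    """Binary classification step with two parallel classifiers, plus multiclass labeling."""
    if not is_valid_nxd(nxd):
        return None  # invalid NXDs are ignored

    mal_e2ld = bias_reduced_clf(extract_e2ld(nxd)) >= threshold  # e2LD only
    mal_full = full_clf(nxd) >= threshold                        # full FQDN

    if not mal_e2ld and not mal_full:
        verdict = Verdict("benign", "both classifiers agree on benign")
    elif mal_e2ld and not mal_full:
        # How the disagreement cases are finally labeled is a policy choice; here they are flagged.
        verdict = Verdict("flagged", "possible attempt to exploit biases outside the e2LD")
    elif not mal_e2ld and mal_full:
        verdict = Verdict("flagged", "features outside the e2LD appear decisive")
    else:
        verdict = Verdict("malicious", "both classifiers agree on malicious")

    # Regardless of the outcome, the multiclass model (trained on FQDNs) suggests the most
    # likely family; everything can then be forwarded to the visualization system.
    verdict.family = multiclass_clf(nxd)
    return verdict
```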
The global view summarizes all classification results for the entire network and allows finding multiple hosts infected with the same malware. The local view summarizes the results for a single host and allows targeted analysis of all queries performed by that host. Local views can be accessed through the Recent Classification Results by Client view, which displays the total and relative number of domains classified as benign or malicious per host. From both, the global and the local view, it is possible to analyze how often and which hosts queried a particular domain. Additionally, for each domain, it is possible to analyze the clusters in which the relevance vector falls and to extract a simple regex that fits all samples within the cluster. In this way, it may be possible to identify multiple hosts infected with the same malware. § ADDITIONAL UTILIZATION OF THE KNOWLEDGE GAINED As a secondary contribution, we use the knowledge gained in the previous evaluations to improve the state-of-the-art deep learning and feature-based multiclass classifiers in terms of classification performance and efficiency. In this section, we therefore take a step back from improving the generalization of classifiers by removing classification biases and briefly turn our attention to improving the performance and efficiency of the classifiers themselves. §.§ Improving M-ResNet In this work, we mainly improved the binary classifier B-ResNet by mitigating identified biases. Now we also take a closer look at the multiclass classifier M-ResNet. In Section <ref>, we noted that the classifier does not use the TLD as a standalone feature, but also derives additional features from the character distribution. Since the TLD can be freely chosen by the adversary and the TLD is more of a categorical feature, we adapt the M-ResNet model to classify a domain by using the one-hot encoded vector representation of the TLD instead of the character-wise encoding. Thereby, we aim to improve classification performance by allowing the classifier to focus on the more important part of the FQDN. Furthermore, this has the effect that other implicit features, such as domain length, are no longer affected by the chosen TLD. We evaluated this model using a four-fold cross validation on DSmod but could not measure any significant improvement. As could be seen in the relevance vector cluster analysis, the original model appears to have a large enough capacity to learn the correct extraction of the TLD from the characters. Furthermore, the characters within the TLD do not appear to significantly affect the multiclass classifier. Since overparameterization has been associated with a higher susceptibility to learning spurious correlations <cit.>, we attempt to iteratively reduce the complexity of the adapted model. As a result, we were able to successfully remove the last four residual blocks and reduce the number of trainable parameters by 35.5% without affecting classification performance (f1-score of 0.78691). Thereby, we additionally improved the model's carbon footprint and reduced the required time for training and inference. §.§ Improving EXPLAIN Now we try to improve the feature-based multiclass classifier EXPLAIN by using knowledge extracted by explainability methods applied on M-ResNet. To this end, we cluster relevance vectors for samples which are correctly classified by M-ResNet but incorrectly by EXPLAIN, targeting the identification of features that are missing in EXPLAIN. 
We attribute the performance difference between both classifiers to four findings: (1) ResNet seems to handle imbalanced data and class weighting better, (2) for some DGAs, M-ResNet is simply better at guessing, (3) M-ResNet is able to learn complex features through a series of non-linear transformations that are not easily understood by a human, and (4) both classifiers converge to different local optima and thus tend to assign similar samples to either one or the other class. §.§.§ Imbalanced Data Investigating the relevance vector clusters for the Redyms DGA, it is immediately apparent that for M-ResNet, the "-" character is useful for the correct classification. Although the feature that counts the "-" character is defined in EXPLAIN's source code, it was not selected during the feature selection process. We reckon that this is because the feature is only important for a few classes, while other features are important for a much larger number of classes, which resulted in a lower importance score during the feature selection process. This problem could be the reason why several classes are recognized worse by EXPLAIN, and it suggests that M-ResNet might be better with imbalanced data and class weighting in general. In contrast to EXPLAIN's feature selection step, we assume that M-ResNet does not completely remove self-learned features, but fine-tunes their importance by adjusting the weights. Adding the "-"-feature to EXPLAIN's feature set improves the f1-score for the Redyms DGA by 53.15% and brings the detection rate to a level similar to that of M-ResNet. §.§.§ Random Guessing EXPLAIN mostly confuses the samples of Ud4 with Dmsniff. Analysis of all samples from both classes revealed that both DGAs generate 100% identical domains, so they are most likely the same DGA. Upon inquiry to DGArchive, this was confirmed, and the feed of Ud4 will be discontinued in the future. Here, M-ResNet is just better at guessing (by an f1-score of 16.48%). §.§.§ Complex Features We cannot exclude the possibility that M-ResNet is able to learn complex features through a series of non-linear transformations that are not easily understood by a human. For instance, related work <cit.> suggests that the ResNet classifier may be able to distinguish, at least to some degree, between underlying pseudo-random number generators. To improve EXPLAIN, we adapt the features related to randomness tests and add all of them to the final feature set. In detail, we adapt the 14 randomness tests from <cit.> to include the final p-values used for deciding whether a certain randomness test is passed, instead of only the binary result of the test. Reevaluating the model with all additional features, we could measure a small improvement of 0.783% in f1-score. §.§.§ Different Optima Most other DGAs that are confused by EXPLAIN generate similar domains, and often all domains match the same regexes. EXPLAIN is significantly better (> 10% in f1-score) than M-ResNet for four DGAs, whereas M-ResNet is significantly better for four other DGAs. We reckon that both models converge to different local optima and thus tend to assign similar samples to either one or the other class. §.§.§ Overall Results We were able to improve EXPLAIN from an f1-score of 0.76733 to 0.77516 by adding additional features to EXPLAIN's feature set, bringing it closer to the performance of deep learning classifiers such as M-ResNet. § OTHER RELATED WORK We already discussed related work on DGA detection in Section <ref>.
Consequently, we focus here on related work on explainability and bias learning prevention. For the DGA detection use-case, there are only a few works that partially address the explainability of detection systems. Drichel et al. <cit.> proposed the multiclass classifier EXPLAIN as a feature-based alternative to deep learning-based classifiers. While feature-based approaches often seem inherently explainable, it is not always easy to interpret their predictions. For instance, EXPLAIN's predictions are based on the majority vote of 360 decision trees with a maximum depth of 43 and a random mixture of 76 features, including several statistical features that are difficult for a human to analyze. The authors of <cit.> also adopt a feature-based RF classifier based on the EXPOSURE system <cit.> and mainly use SHAP <cit.> to derive explanations. However, their approach relies heavily on extensive tracking of DNS traffic and is unable to derive explanations in the multiclass classification setting. None of these works investigate biases inherent in detection methods. To the best of our knowledge, this is the first work to critically analyze the features used, focusing on their limitations and unintended consequences for the DGA use-case. In addition, related work <cit.> has identified several general measures to mitigate bias learning that can also be applied here. Changing the loss function <cit.> and adding regularization terms <cit.> can force a classifier to learn more complex features instead of focusing on simple biases. Also, the learning rate of the optimizer can be adjusted to make the classifier learn either simpler or more complex features <cit.>. Somewhat related is the issue of adversarial attacks and the robustness of classifiers. Here, semantic gaps in the data create blind spots in classifiers, which make them susceptible to small input perturbations that lead to misclassifications. Adversarial training can be used to prevent such classification shortcuts <cit.>. In the context of DGA detection, several works deal with this topic. § CONCLUSION In this work, we showed how XAI methods can be used to debug, better understand, and enhance state-of-the-art DGA classifiers. To this end, we performed a comparative evaluation of different explainability methods and used the best ones to explain the predictions of the deep learning classifiers. Thereby, we identified biases present in state-of-the-art classifiers that can be easily exploited by an adversary to bypass detection. To solve these issues, we proposed a bias-reduced classification system that mitigates the biases, achieves state-of-the-art detection performance, generalizes well between different networks, and is time-robust. In this context, we measured the true performance of state-of-the-art DGA classifiers, showed the limits of context-less DGA binary classification, and proposed a visualization system that facilitates decision-making and helps to understand the reasoning of deep learning classifiers. Finally, we used the knowledge gained from our study to improve the state-of-the-art deep learning as well as feature-based approaches for DGA multiclass classification. In future work, the usefulness of the visualization system needs to be evaluated, preferably in an operational environment. A promising future research direction is the combination of context-less and context-aware systems to further enhance detection and decision-making.
§ AVAILABILITY We make the source code of the machine learning models publicly available[<https://gitlab.com/rwth-itsec/explainability-analyzed-dga-models>] to encourage replication studies and facilitate future work. The authors would like to thank Daniel Plohmann, Simon Ofner, and the Cyber Analysis & Defense department of Fraunhofer FKIE for granting us access to DGArchive as well as Siemens AG and Jens Hektor from the IT Center of RWTH Aachen University for providing NXD data. § EVALUATING EXPLAINABILITY METHODS We evaluate the explainability methods using four metrics: fidelity, sparsity, stability, and efficiency, following <cit.>. Since we only evaluate white-box methods that compute relevance vectors directly from the weights of a neural network, all explainability methods are complete in that they are able to compute non-degenerate explanations for every possible input. To evaluate the explainability methods, we use the four classifiers trained on DSmod during our results reproduction study and predict all samples from DSex. For each metric, we average the results across all classifiers. §.§ Fidelity The first evaluation criterion is fidelity, which measures how faithfully important features contribute to a particular prediction. We adopt the Descriptive Accuracy (DA) metric from <cit.>, which measures, for a given input sample x, how removing the k most relevant features changes the original neural network's prediction. The idea behind this metric is that as relevant features are removed, accuracy should decrease, as the classifier has less information to make the correct prediction. The better an explanation, the faster the accuracy decreases, as the removed features capture more of the context of the predictions. Thus, explainability methods that show a more rapid decline in DA when removing key features provide better explanations than explainability methods with a more gradual decrease. In context-less DGA classification, removing an input feature corresponds to removing a character from a domain. Here, we consider two scenarios: (1) removing a character and thus reducing the total domain length, and (2) replacing a character with the padding symbol and thereby retaining the original domain length. Both approaches have drawbacks: removing a character can have a greater impact on accuracy because it also affects the implicit feature of domain length. On the other hand, preserving the domain length by replacing the character with the padding symbol may confuse a classifier, as the classifier was never faced with such samples during training. Hence, we calculate the average DA for both scenarios on all samples of DSex for k∈[1,10]. To derive a single score, we compute the Area Under the Curve (AUC). The smaller the score, the better the explanations. Results: In Table <ref>, we show the results for this criterion. For further evaluation, we choose integratedgradients, as it scores best when removing the top-k features, and b-cos, as it achieves the best score in the second scenario. In addition, we also select lrp.zplus since it obtains the best scores when replacing features on the unmodified M-ResNet model. §.§ Sparsity An explanation is only meaningful if a limited number of features is selected as the explanation result, making it understandable for a human analyst. To measure the sparsity of an explanation, we follow the Mass Around Zero (MAZ) criterion proposed in <cit.>.
First, for every sample, we calculate the relevance vector r = (r_0,...,r_n), normalize the absolute entries of r to the range [0,1], and fit it to a half-normalized histogram h. Then, we calculate the MAZ as MAZ(u) = ∫_0^u h(x) dx for u ∈ [0,1]. Finally, we compute the AUC to derive a single score. Sparse explanations have a steep increase in MAZ around zero and are flat around one because only a few features are marked as relevant. Conversely, explanations with many relevant features have a smaller slope close to zero. Therefore, the higher the AUC score, the sparser the explanations. Results: In the third column of Table <ref>, we show the results for this criterion. We select lrp.alpha2beta1 for further evaluation as it shows the best sparsity for explanations. However, high sparsity is only useful if the most relevant features are correctly determined. Therefore, we also investigate Sparsity * (1-Fidelity) and display the results in the fourth column. When fidelity is taken into account, integratedgradients shows the sparsest explanations. §.§ Stability An explainability method is stable if it provides the same explanation for a given input over multiple runs. Since we only evaluate white-box approaches which calculate the relevance vector deterministically, all methods are stable. However, here we still want to evaluate the stability of the explainability methods over different model weights, i.e., whether the explainability methods calculate similar explanations for different model weights. Assuming that all models converge to similar local optima, it is conceivable that they learn the same features that are similarly relevant to predictions of specific classes. Note that this need not be the case, as there may be multiple highly predictive features for a single class. However, we believe this is an important criterion: when deriving explanations in an operational environment, it is beneficial if the security analyst is presented with similar explanations for the same classes after a model update (e.g., after the inclusion of a newly emerged malware family) as before the model update. Otherwise, the new explanations would confuse rather than help the analyst. The standard deviation of the f1-score across the four folds is low at 0.00552, which may indicate that the classifiers are converging to similar local optima. To evaluate this criterion, we first compute the average of the standard deviation values (std) for each entry of a relevance vector across all folds for all domains. Then, we average these values to derive a single score, with smaller values corresponding to more similar explanations across different model weights. Results: The fifth column of Table <ref> shows the results for this criterion. The two methods which achieve the best results by far are deeptaylor and lrp.zplus. Both methods also achieve high fidelity scores (deeptaylor is second best in the feature remove setting and lrp.zplus is best on the unmodified M-ResNet model in the feature replace setting), which may indicate that the models learn the same most predictive features for the same classes. On the other hand, integratedgradients achieves the best fidelity score in the feature remove setting and only performs moderately well in terms of stability. This could be due to the fact that, in contrast to the other two methods, integratedgradients shows a significantly higher sparsity, which could indicate that there may be multiple highly predictive feature combinations for the same classes.
We add deeptaylor to the list of methods to be evaluated further. However, the results of this criterion should be treated with caution, as they depend heavily on what a model has learned. Since we use the same models for all explainability methods, this criterion still allows us to compare explainability methods in terms of whether they provide similar explanations across different model weights. §.§ Efficiency We follow the definition of efficiency in <cit.>, which states that a method is efficient if it does not delay the typical workflow of an expert. To evaluate this criterion, we measured and averaged the times to compute the explanations during the previous experiments. Results: In the last column of Table <ref>, we display the average time in seconds for computing a single explanation for a prediction. All methods are sufficiently fast that we do not select any method based on this criterion. B-cos, integratedgradients, and smoothgrad are around one order of magnitude slower than the other approaches. For B-cos, this is because the current implementation does not support batch calculations for deriving explanations. For integratedgradients and smoothgrad, this is because we had to reduce the batch size from 2,000 samples to 200 due to the higher RAM requirements of these algorithms. Nevertheless, even without batch calculations, all methods are sufficiently fast and would not delay the workflow of an expert. §.§ Comparison of Explainability Methods We briefly document our findings from using different explainability methods during our evaluations: While lrp.alpha2beta1 often provides very sparse explanations, it occasionally seems to fail, sometimes just flagging features that argue against the prediction even though the classifier is very confident. We cannot justify the loss of performance caused by the required adjustment to the state-of-the-art M-ResNet model for the explanations generated by b-cos, since its explanations are not significantly different from those of the other methods. The three best-performing explainability methods in our study are deeptaylor, integratedgradients, and lrp.zplus. All three can be used to explain the predictions of deep learning classifiers for the DGA classification use-case. However, integratedgradients seems to provide sparser explanations compared to the other two methods. § ADDITIONAL ROC CURVES OF THE REAL-WORLD STUDY
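The averaged and per-DGA ROC curves themselves are shown in the referenced figures. As a complement, the following is a minimal sketch, not the authors' evaluation code, of how ROC curves restricted to a low-FPR range can be averaged over multiple test sets and how a decision threshold targeting a fixed FPR can be read off the benign scores alone; the function names and the scikit-learn dependency are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve

def averaged_low_fpr_roc(runs, max_fpr=0.01, n_grid=200):
    """Interpolate each run's ROC curve onto a common FPR grid in [0, max_fpr] and average.

    `runs` is assumed to be a list of (y_true, y_score) pairs, one per test set.
    """
    grid = np.linspace(0.0, max_fpr, n_grid)
    tprs = []
    for y_true, y_score in runs:
        fpr, tpr, _ = roc_curve(y_true, y_score)
        tprs.append(np.interp(grid, fpr, tpr))
    return grid, np.mean(tprs, axis=0)

def threshold_for_target_fpr(benign_scores, target_fpr=0.001):
    """Pick the score threshold so that roughly target_fpr of benign samples exceed it."""
    return np.quantile(benign_scores, 1.0 - target_fpr)
```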
http://arxiv.org/abs/2307.05943v1
20230712061829
Empirical Bayes large-scale multiple testing for high-dimensional sparse binary sequences
[ "Bo Y. -C. Ning" ]
math.ST
[ "math.ST", "stat.TH" ]
EB multiple testing for sparse binary sequences. Bo Y.-C. Ning, Department of Statistics, University of California, Davis, 1227 Mathematical Science Building, One Shields Avenue, Davis, CA 95616. This paper investigates the multiple testing problem for high-dimensional sparse binary sequences motivated by the crowdsourcing problem in machine learning. We adopt an empirical Bayes approach to estimate possibly sparse sequences with Bernoulli noises. We find, surprisingly, that the hard thresholding rule deduced from the spike-and-slab posterior is not optimal, even when using a uniform prior. Two approaches are then proposed to calibrate the posterior for achieving the optimal signal detection boundary, and two multiple testing procedures are constructed based on these calibrated posteriors. Sharp frequentist theoretical results for these procedures are obtained, showing that both can effectively control the false discovery rate uniformly for signals under a sparsity assumption. Numerical experiments are conducted to validate our theory in finite samples. MSC: Primary 62G10, 62G20. Keywords: crowdsourcing, empirical Bayes, false discovery rate, sparse binary sequences, spike-and-slab posterior, multiple testing. § INTRODUCTION Large-scale multiple testing problems arise frequently in modern statistical applications, such as in microarray experiments, where thousands of hypotheses are tested simultaneously to identify important genes associated with certain types of cancer. Due to the large number of hypotheses compared to the sample size, sparsity assumptions are commonly adopted in order to control the false discovery rate (FDR). The connection between FDR control procedures and their adaptability to sparsity was formally established by the pioneering work <cit.>. They showed that the hard thresholding rule derived from the Benjamini-Hochberg (BH) procedure <cit.> can be asymptotically minimax optimal with an appropriately chosen FDR control parameter. Subsequent studies showed that minimax optimal sparsity-inducing methods, e.g., SLOPE <cit.>, GAP <cit.>, and empirical Bayes methods <cit.>, can also be used to construct multiple testing procedures with excellent FDR control. In this paper, we focus on the empirical Bayes method for multiple testing. The two-groups model has been widely adopted in the Bayesian literature for multiple testing <cit.>. Within this framework, the spike-and-slab model emerges as a natural choice when incorporating a sparsity assumption. This model introduces sparsity through a spike-and-slab prior, which is a mixture of two densities representing the null hypothesis and the alternative hypothesis, respectively. The weight assigned to each density measures the probability of accepting either the null or the alternative hypothesis. While extensive research has been conducted over the past two decades to investigate the theoretical properties of various spike-and-slab models <cit.>, fewer studies have focused on their application to the multiple testing problem. The first theoretical results for a multiple testing procedure using an empirical Bayes approach were provided in the context of the Gaussian sequence model, confirming its excellent performance in controlling the FDR under sparsity. A recent work by <cit.> has extended those findings to accommodate models with sub-Gaussian noises. The emergence of applications in machine learning has brought new challenges in multiple testing, for example, the crowdsourcing problem.
In this problem, m workers are asked to provide label assignments for n objects. The goal is to recover the true labels for those objects. The crowdsourcing approach to classifying unknown labels has become popular across many data science fields. For instance, online commercial platforms such as Amazon Mechanical Turk <cit.> allow paid users to obtain hundreds of thousands of labels from crowdsourcing workers within only a few hours. Another example is the Space Warps project <cit.> initiated in 2013, which engaged 37,000 citizen scientists to participate in classifying 11 million images to identify gravitational lenses over an eight-month period. Analyzing crowdsourcing data presents two significant challenges. First, the number of objects is much larger than the number of workers (n > m). Second, because workers or citizen scientists often have limited domain expertise, the collected data are very noisy. Let us focus on the case in which label assignments are binary, which should already cover a wide range of applications. One can formulate this problem as a multiple testing problem, which involves testing a large number of hypotheses simultaneously for Bernoulli variables. Hence, there is a need to develop new multiple testing methods to accommodate models for sparse binary sequences. Our paper has two main objectives: first, we develop new procedures for multiple testing for high-dimensional binary sequences using empirical Bayes approaches. Second, we establish sharp frequentist theoretical results on FDR control for these procedures. The study of multiple testing problems for binary sequences has received less attention compared to the Gaussian model and other generalized linear models. <cit.> investigated a related problem of testing a global null hypothesis versus sparse alternatives. They found a difference between the signal detection boundary in the high-dimensional binary regression model and that in the Gaussian model of <cit.> for small m. Recent studies by <cit.> and <cit.> have explored multiple testing methods in the high-dimensional logistic regression model and the binary generalized linear model, respectively. To the best of our knowledge, empirical Bayes approaches have not been explored in these contexts, despite their excellent finite sample performance, as we will demonstrate in our simulation studies in Section <ref>. This paper aims to bridge this gap by proposing practical and theoretically valid multiple testing procedures using empirical Bayes approaches for sparse binary sequences. Let us introduce our model and the multiple testing problem in the following section. §.§ Problem setup The dataset consists of binary outcomes represented by 𝒟 = {Z_ij, i = 1, …, m, j = 1, …, n}. We consider the sequence model with Bernoulli noises given by Z_ij ind∼ Ber(θ_j), i = 1, …, m, j = 1, …, n, where θ = (θ_1, …, θ_n) is an n-dimensional vector containing unknown parameters, θ∈ [0, 1]^n. The true parameter is denoted as θ_0. We assume θ_0 ∈ℓ_0[s_n] for s_n ≤ n, where ℓ_0[s_n] = {θ∈[0, 1]^n, #{j: θ_j ≠ 1/2}≤ s_n}. Our goal is to test the following n hypotheses simultaneously: H_0j: θ_0,j = 1/2 versus H_1j: θ_0,j≠ 1/2, j = 1, …, n. Since {Z_ij} are independent, let X_j = ∑_i = 1^m Z_ij; then X_j ind∼ Bin(m, θ_j). Thus, the problem is equivalent to a multiple testing problem for binomial distributions. The size of m cannot be too small: this has been proved by <cit.>, who showed that if m ≪ log n, it is impossible to construct any powerful two-sided testing procedure for sparse signals.
In this paper, we assume m ≫ (log n)^2, which is slightly stronger than the assumption in <cit.>. We find it challenging to weaken this assumption for technical reasons; see the discussion in Section <ref>. §.§ The spike-and-slab posterior for Bernoulli variables Consider the spike-and-slab prior given as follows: θ | w ∼⊗_j=1^n { (1-w) δ_1/2 + w γ}, where w ∈ (0, 1) is the weight, δ_1/2 is the Dirac measure at 1/2, which is the point of the null hypothesis in (<ref>), and γ is a continuous density symmetric about 1/2. We choose the conjugate prior for γ; i.e., γ = Beta(α, α), α∈ℤ^+. Given the model in (<ref>) and the prior in (<ref>), the posterior distribution is obtained as P^π(θ | 𝒟, w) = ⊗_j=1^n {ℓ(X_j) δ_1/2 + (1-ℓ(X_j)) 𝒢_X_j}, where 𝒢_X_j = Β(θ_j; X_j + 1, m - X_j + 1) and ℓ(x) = ℓ(x; w) = P^π(θ = 1/2 | X = x, w) = (1-w) φ(x)/{(1-w) φ(x) + w g(x)}, with g(x) = ∫φ_θ(x) γ(θ) dθ. We use the notations φ_θ = Bin(m, θ) and φ = φ_1/2 for a binomial distribution. For computational convenience as well as theoretical considerations, we set α = 1. Then γ is the uniform distribution on [0, 1], and g(x) = ∫φ_θ(x) γ(θ) dθ = (m+1)^-1 is simply a constant. We do not recommend using values α > 1, as we explain in the Discussion section. §.§ The empirical Bayes approach To estimate the unknown weight w in the posterior, we adopt an empirical Bayes approach, similar to the one introduced for the Gaussian sequence model by <cit.>. Define the log-marginal density of w as L(w) = ∑_j = 1^n logφ(X_j) + ∑_j=1^n log (1 + w β(X_j)), where β(u) = (g/φ)(u) - 1. We estimate w by solving the following optimization problem: ŵ = argmax_{w ∈ [1/n, 1]} L(w), where ŵ is the marginal maximum likelihood estimator (MMLE). The lower bound imposed in (<ref>), specifically 1/n, ensures that ŵ is not too small, which is important for controlling the FDR for small signals near 1/2. It is possible that the solution ŵ lies at the boundary of this interval. One can see this through the score function: S(w) = ∂/∂ w L(w) = ∑_j=1^n β(X_j, w), β(x, w) = β(x)/{1+wβ(x)}. Since β(x, w) is monotone decreasing in w for every x, the score function is decreasing in w, and a unique solution of S(w) = 0 exists if S(1) < 0 and S(1/n) > 0. Otherwise, ŵ = 1/n or 1. Asymptotic analysis for the score function will be provided in Section <ref>. §.§ Empirical Bayes multiple testing methodologies Returning to the multiple testing problem, we construct our multiple testing procedures based on the empirical Bayes posterior given in Section <ref>. The local FDR, as introduced by Bradley Efron <cit.>, is the posterior probability that the parameter takes its value under the null hypothesis. In our posterior, ℓ(x; w) in (<ref>) is the local FDR. To avoid confusion between the FDR and the procedure itself, we refer to the procedure based on the local FDR as the ℓ-value procedure throughout this paper, following earlier work. Theoretical results on FDR control will be presented for the ℓ-value procedure. Here, it is important to distinguish between controlling the Bayes FDR (BFDR) and the FDR itself. The BFDR is the FDR integrated over the prior distribution, defined by BFDR(; w, γ) = ∫_θ∈ [0, 1]^n FDR(θ, ; w, γ) dΠ(θ). While controlling the FDR ensures control of the BFDR, the reverse is not necessarily true. More importantly, controlling the BFDR does not provide information about how the FDR behaves under arbitrary sparsity patterns for θ_0, which is the aim of this paper. We shall focus on the FDR, not the BFDR.
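As an illustration of the quantities just introduced, the following is a minimal numerical sketch, under the uniform (α = 1) prior, of the local FDR ℓ(x; w) and of the MMLE ŵ obtained by maximizing L(w) over [1/n, 1]. The use of SciPy and of the bounded scalar optimizer is an implementation choice for illustration, not part of the paper.

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize_scalar

def ell_value(x, w, m):
    """Local FDR ell(x; w) = (1-w) phi(x) / ((1-w) phi(x) + w g(x)), with g = 1/(m+1)."""
    phi = binom.pmf(x, m, 0.5)
    g = 1.0 / (m + 1)
    return (1 - w) * phi / ((1 - w) * phi + w * g)

def mmle_weight(X, m):
    """Marginal maximum likelihood estimate of w on [1/n, 1].

    Note: for very large m one would work on the log scale to avoid pmf underflow.
    """
    n = len(X)
    beta = 1.0 / ((m + 1) * binom.pmf(X, m, 0.5)) - 1.0   # beta(x) = (g/phi)(x) - 1
    neg_log_marginal = lambda w: -np.sum(np.log1p(w * beta))
    res = minimize_scalar(neg_log_marginal, bounds=(1.0 / n, 1.0), method="bounded")
    return res.x
```

Plugging the estimate ŵ back into ℓ(x; ·) gives the estimated local FDR used by the procedures discussed below.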
The choice of the prior is crucial in controlling the FDR. The resulting posterior should not only be theoretically sound but also easy to compute. For example, in the Gaussian sequence model, a heavy-tailed slab distribution such as the Laplace or Cauchy is recommended. The computation tool for its posterior was already implemented by <cit.>. For the Bernoulli model, beta priors are convenient to work with. The prior also needs to be `flat' for theoretical considerations, which leads us to choose the uniform prior. However, there is a drawback to choosing this prior, as we discuss below. We then address this issue by proposing two multiple testing procedures using ℓ-values and q-values, introduced in Section <ref>. §.§ Our contribution The present work introduces three easily implementable empirical Bayes multiple testing procedures for high-dimensional sparse binary data: the ℓ-value, ℓ-value, and q-value procedures. The latter two procedures in particular exhibit excellent performance in controlling the FDR. Simulation-based results show they even outperform the Benjamini-Hochberg procedure for sparse signals. Frequentist theoretical results are established for the three procedures, including: * We establish the sharp signal detection boundary for a class of thresholding-based procedures and show that the threshold of our empirical Bayes posterior is larger than this boundary. Specifically, the sharp boundary is √(1/2mlog (n/s_n)) and the threshold of the posterior is √(1/2m(log(n/s_n) + log(√(2)(m+1)/√(mπ)))). Note that the assumption m ≫ (log n)^2 is needed for getting these results. * This first point highlights the major difference between the Bernoulli sequence model and the Gaussian sequence model: the posterior for the Bernoulli sequence model is not an ideal object for multiple testing. Additional calibration is necessary for the posterior to achieve the sharp signal detection boundary. We therefore propose the ℓ-value and q-value procedures. * We establish an upper bound for the FDR for all three procedures. All of them allow correct uniform FDR control for arbitrary sparse signals. The q-value procedure achieves the exact targeted level of FDR control for signals above its threshold. The ℓ-value procedure is the most conservative one; it fails to control the FNR even for moderate-sized signals. Our proof techniques are inspired by earlier work on the MMLE and on empirical Bayes multiple testing procedures in the context of the Gaussian sequence model <cit.>. However, because our model is Bernoulli instead of Gaussian, substantial modifications of their arguments are needed. We would like to highlight two major challenges. First, obtaining tight bounds for related quantities is more difficult. For instance, we need to carefully control the remainder term for approximating the binomial distribution with a Gaussian distribution. Working with, e.g., the Chernoff bound is not enough to derive our results. Another example is that when bounding the ratio between two binomial distributions with different parameter values, one needs to deal with the difference in their means as well as in their variances. We thus need to use more sophisticated arguments to bound quantities related to the binomial distribution. The bounds we use for binomial distributions are presented in Section <ref>, and some of them may be of independent interest. Second, the key step in obtaining the uniform FDR control result is to ensure that certain small signals do not dominate in the upper bound of the FDR.
In the Gaussian sequence model, small signals are those below the sharp signal detection boundary. However, in our model, as the threshold is larger than the sharp boundary, we also need to consider signals between the boundary and the threshold in order to control the FDR uniformly. §.§ Outline of this paper The paper proceeds as follows: Section <ref> introduces the ℓ-value procedure. We will show its threshold is suboptimal. Section <ref> explores two approaches to calibrate the empirical Bayes posterior and introduces the ℓ-value and q-value procedures, whose thresholds are studied and compared. Section <ref> presents the first main result, showing that all three procedures can achieve uniform FDR control for arbitrary sparse signals. Section <ref> provides the second main result, showing that the q-value procedure can achieve the exact targeted level asymptotically, while the ℓ-value procedure fails to control the false negative rate (FNR) for certain large signals. Numerical experiments are conducted in Section <ref>. Section <ref> applies our theory to the two-class crowdsourcing problem. The paper concludes with a discussion in Section <ref>. Proofs of lemmas and theorems are provided in Sections <ref>-<ref>, and auxiliary lemmas are given in Sections <ref> and <ref>. §.§ Notations Define φ_θ(x) = Bin(m, θ), the binomial distribution with parameters m and θ. If θ = 1/2, we denote φ(x) = φ_1/2(x). Let _θ(u) be the cdf of φ_θ(x) and (u) be the cdf of φ(x). Also, denote ϕ(x) as the standard normal distribution with its cdf Φ(x). For a cdf function F(x), let F̅(x) = 1 - F(x). For any two real numbers a and b, let a ∨ b = max{a, b} and a ∧ b = min{a, b}. Denote a ≲ b as a ≤ C b for some constant C. For two sequences c_n and d_n depending on n, c_n ≪ d_n stands for c_n/d_n → 0 as n →∞, c_n ≍ d_n means that there exist constants A, A' > 0 such that A c_n ≤ d_n ≤ A'c_n, and c_n ∼ d_n stands for c_n - d_n = o(c_n), where o(1) is a deterministic sequence going to 0 with n. 1{·} is the indicator function. § THE ℓ-VALUE PROCEDURE The ℓ-value, also known as the local false discovery rate introduced by <cit.>, is defined as the probability that the null hypothesis is true conditionally on the test statistic being equal to the observed value. In our model, the ℓ-value is defined as ℓ(x) = ℓ(x; w) in (<ref>). The multiple testing procedure based on ℓ-values involves three steps: first, estimate the MMLE ŵ by solving (<ref>). Second, compute ℓ̂(x) = ℓ̂(x; ŵ) by substituting the ŵ obtained in the previous step. Last, determine a cutoff value t ∈ (0, 1) and reject or accept a null hypothesis based on whether ℓ̂(x) ≤ t or ℓ̂(x) > t. We provide a summary of this procedure in Table <ref>. §.§ Analyzing the threshold of the ℓ-value procedure The ℓ-value procedure is a thresholding-based procedure. The threshold is determined by the spike-and-slab posterior and is sensitive to the choice of the prior. In the Gaussian sequence model, the thresholding rule of the posterior distribution has been investigated by <cit.>. In this section, we study the threshold of the ℓ-value procedure. Lemma <ref> provides the formula for this threshold derived from the posterior distribution given in (<ref>). For a fixed t ∈ (0, 1) and w ∈ (0, 1), define r(w,t) = wt/{(1-w)(1-t)} and consider the test function ^ℓ = 1{ℓ(x; w, g) ≤ t} for the ℓ-value given in (<ref>); then ^ℓ = 1{|x - m/2| ≥ m t^ℓ_n}, where, for η^ℓ(·) = 1/m(φ/g)^-1(·), we have t^ℓ_n := t^ℓ_n(w, t) = η^ℓ(r(w, t)) - 1/2.
Lemma <ref> shows that the inverse of the ratio (φ/g)(·) is the key quantity in determining the threshold. In Lemma <ref>, we present an asymptotic bound for t_n^ℓ. Its non-asymptotic bounds are given in Section <ref>. For t_n^ℓ(w, t) given in Lemma <ref>, let r(w,t) = wt/(1-w)(1-t) for any w ∈ (0, 1) and a fixed t ∈ (0, 1), if log^2 (1/(r(w, t)))/m → 0 as m →∞, then t_n^ℓ(w, t) ∼√(1/2m(log(1/r(w,t)) + log( √(2)(1+m)/√(π m)))). The bound for t^ℓ_n in (<ref>) indicates t_n^ℓ(w, t) ≥√(1/2mlog(1/r(w,t))). If w ≍ s_n/n, which is the case for the MMLE when a number of strong signals are present (see the proof in Section <ref>), then t_n^ℓ(w, t) ≥√(1/2mlog(n/s_n)). This suggests that t^ℓ_n is larger than the sharp signal detection boundary, as given in Proposition <ref>. Furthermore, if √(m)≫ n/s_n, then t_n^ℓ(w, t) ∼√(1/4mlog m), which can even be independent of s_n. This suggests that the thresholding rule of the ℓ-value procedure is not optimal. The result in Lemma <ref> highlights the key difference between the threshold derived from the spike-and-slab posterior in the Gaussian setting and that in the Bernoulli setting. In the Gaussian sequence model, the threshold of the posterior is already asymptotically sharp when adopting a heavy-tailed distribution such as Laplace or Cauchy. However, this is not the case in the Bernoulli model. In order to achieve the optimal signal detection boundary provided in the subsequent section, a careful calibration of this posterior is needed. We will introduce two calibration approaches in Section <ref>. §.§ The sharp signal detection boundary Assuming s_n ≤ n^v_1 for some v_1 ∈ (0,1) and m ≫ (log n)^2, let us define ζ(w) = √(1/2mlog(1/w)). and Θ_0[s_n, a], a > 0, a set containing `large' signals such that Θ_0[s_n, a] = {θ∈ℓ_0[s_n]: |θ_j - 1/2| ≥ a ζ(s_n/n), j ∈ S_θ, |S_θ| = s_n }, We consider a large class of thresholding-based multiple testing procedures denoted by 𝒯. For any test ∈𝒯, = {_1, …, _n}, each _j, 1 ≤ j ≤ n, has the form _j(X) = {X - m/2 ≥ m τ_1(X) or X - m/2 ≤ -mτ_2(X)}, for some measurable functions τ_1(X) and τ_2(X). Clearly, ^ℓ∈𝒯. Define the multiple testing risk as the summation of the false discovery rate and the false negative rate (FNR) given by ℜ(θ, ) = (θ, ) + (θ, ), where for = {_1, …, _n}, (θ, ) = ∑_j=1^n 1{θ_j = 1/2}_j/1 ∨∑_j=1^n _j, (θ, ) = ∑_j=1^n 1{θ_j ≠ 1/2}(1-_j)/1 ∨∑_j=1^n 1{θ_j = 1/2}. The next result establishes a lower bound for the risk defined in (<ref>) among a class of thresholding-based procedures of the form (<ref>), which in turn gives the sharp signal detection boundary for our model. Let 𝒯 be a class of thresholding-based multiple testing procedures given in (<ref>). Suppose s_n ≤ n^v_1 for any v_1 ∈ (0, 1), m ≫ (log n)^2, and θ_0 ∈Θ_0[s_n, a] in (<ref>), then for any ∈𝒯 and any positive a < 1, lim inf_n →∞inf_∈𝒯inf_θ_0 ∈Θ_0[s_n, a]ℜ(θ_0, ) ≥ 1. The assumption m ≫ (log n)^2 is needed for proving Proposition <ref>. It might be possible that this assumption could be weakened to m ≫log n, as seen in <cit.> for testing a global null hypothesis versus sparse alternatives. However, we found it is challenging to remove this assumption based on our current proof technique, as it naturally arises when controlling the approximation error between the Binomial distribution and the Gaussian distribution. Despite this, we find that working with this assumption is sufficient for studying our multiple testing procedures. 
The result in Proposition <ref> shows that the sharp signal detection boundary of the multiple testing problem is ζ(s_n/n). The proof can be found in Section <ref>. To prove this proposition, we derived bounds for the inverse of the cdf of the binomial distribution in Lemma <ref>. Since we did not find them elsewhere in the literature, they could be of independent interest for studying other models that relate to the binomial distribution. § THE ℓ-VALUE AND Q-VALUE PROCEDURES In the previous section, we showed that the threshold of the ℓ-value is suboptimal. In this section, we introduce two calibration approaches to adjust the posterior distribution for achieving the sharp signal detection boundary. This leads to two new multiple testing procedures, the ℓ-value and q-value procedures. Let us introduce each of these procedures separately. §.§ Multiple testing procedures based on ℓ- and q-values The ℓ-value replaces the function g(x) in the posterior in (<ref>) with √(2/(π m))(1+m) g(x), and is defined as ℓ(x; w, g) = (1-w) φ(x)/{(1-w)φ(x) + w √(2/π m)(1+m) g(x)}. By multiplying g(x) by this factor, we effectively remove the second term in (<ref>), as demonstrated in Lemma <ref>. The q-value, introduced by <cit.>, is defined as the probability that the null hypothesis is true conditionally on the test statistic being larger than the observed value. Let Y = X - m/2 and y = x - m/2; the q-value is defined as follows: q(x; w) = P^π(θ = 1/2 | |Y| ≥ |y|, w) = (1-w) 𝐁̅(m/2 + |y|)/{(1-w) 𝐁̅(m/2 + |y|) + w (m/2 + |y|)}, where (·) and (·) are one minus the cdf of the densities φ(·) and g(·), respectively. As g(·) = (m+1)^-1, (m/2 + |y|) = (m/2 - |y|)/(1+m). Comparing (<ref>) with (<ref>), the q-value alters the significance region of the original posterior. The construction of the ℓ-value and q-value procedures is similar to that of the ℓ-value procedure. One needs to replace ℓ(x; w, g) in the test function with ℓ(x; w, g) in (<ref>) and q(x; w) in (<ref>), respectively. It is important to note that w is still chosen as the MMLE ŵ estimated using the original posterior distribution. The two procedures are summarized in Table <ref>. §.§ Analyzing thresholds of ℓ- and q-value procedures We study the thresholds of both the ℓ-value and q-value procedures in this section. We only give asymptotic results here; non-asymptotic bounds are given in Section <ref>. For a fixed t ∈ (0, 1) and w ∈ (0, 1), define r(w, t) = wt/{(1-w)(1-t)}. (a) Let ^ℓ = 1{ℓ(x; w, g) ≤ t} be the test function based on the ℓ-value given in (<ref>); then ^ℓ = 1{|x - m/2| ≥ m t_n^ℓ} with t_n^ℓ = t_n^ℓ(w, t) = η^ℓ(r(w, t)) - 1/2, where η^ℓ(u) = 1/m (φ/g)^-1(√(2)(1+m) u/√(π m)); if log^2(1/r(w,t))/m → 0 as m →∞, then t_n^ℓ(w,t) ∼√(log (1/r(w,t))/2m). (b) Let ^q = 1{q(x; w, g) ≤ t} be the test function based on the q-value given in (<ref>); then ^q = 1{|x - m/2| ≥ m t^q_n} with t^q_n := t^q_n(w, t) = η^q(r(w, t)) - 1/2, where η^q(·) = 1/m(𝐁̅/)^-1(·); if m →∞, then t^q_n(w,t) ∼√({log (1/r(w,t)) - log√(log (1/r(w,t)))}/2m). For sufficiently large m, both t_n^ℓ and t_n^q are of the order √(log(1/r(w,t))/(2m)), which is smaller than t_n^ℓ. When w is replaced with ŵ and ŵ≍ s_n/n, then, for sufficiently large m, both procedures achieve the optimal signal detection boundary ζ(s_n/n) asymptotically. In addition, we observe that t_n^q is slightly smaller than t_n^ℓ. This holds true even for small m, as shown in Lemma <ref>.
Consider the three thresholds η^ℓ(u), η^ℓ(u), and η^q(u) given in (<ref>), (<ref>), and (<ref>), respectively, and m ≥ 8; then for any u ∈ (0, 1), η^q(u) ≤η^ℓ(u) ≤η^ℓ(u). Lemmas <ref> and <ref> together provide asymptotic bounds for the three thresholds, indicating the relation η^ℓ(u) ≥η^ℓ(u) ≥η^q(u) for any u ∈ (0, 1). It is prudent to ask whether the same relation holds for small m. We plot the three thresholds in Figure <ref> with m=30 (left) and m = 1000 (right) to confirm this relation. We observe that as m increases, η^q(u) converges to η^ℓ(u) for all u ∈ (0, 1), but η^ℓ(u) remains large. This observation aligns with the results derived in these lemmas. In Lemma <ref>, we rigorously prove that the same relation holds for any m ≥ 8. Surprisingly, we found that proving the relation between η^q(u) and η^ℓ(u) is nontrivial for small m. We have to use a rather different argument than simply bounding the difference between the two quantities. Details of the proof can be found in Section <ref>. § UNIFORM FDR CONTROL FOR ℓ-, ℓ-, AND Q-VALUE PROCEDURES With the thresholds of the three procedures analyzed, we are ready to present the first main result in this section. In the next theorem, we study the frequentist properties of the ℓ- and q-value procedures and show that they allow uniform control of the FDR under the sparsity assumption, i.e., θ_0 ∈ℓ_0[s_n] for any s_n ≤ n^v_1 with v_1 ∈ (0, 1). For the ℓ-value and q-value defined in (<ref>) and (<ref>), respectively, with w = ŵ in (<ref>), consider the parameter space ℓ_0[s_n] in (<ref>) with s_n ≤ n^v_1 for any v_1 ∈ (0, 1), assuming m ≫ (log n)^2; then there exists a constant K depending on v_1 such that for any t ≤ 4/5 and a sufficiently large n, sup_θ_0 ∈ℓ_0[s_n](θ_0, ^ℓ) ≤K t loglog n/√(log n), sup_θ_0 ∈ℓ_0[s_n](θ_0, ^q) ≤ K t log (1/t). The log(1/t) term in the upper bound for the q-value can be removed by modifying Step q3 in Table <ref>, replacing _j^q with _j^q1{ŵ > w_n} for w_n = log n/n. The proof is similar to that of Theorem 2 for the Gaussian sequence model. The proof of Theorem <ref> can be found in Section <ref>. Our proof is inspired by the approach used for the Gaussian sequence model. The key step is to establish a tight concentration bound for the MMLE ŵ by considering two cases separately, depending on whether (<ref>) has a unique solution or not. This allows us to bound the FDR by replacing ŵ (a random quantity) with deterministic upper and lower bounds depending on n. To establish such a bound for ŵ, we need to analyze a number of quantities related to the score function and show that certain small signals do not dominate in bounding these quantities. While the upper bounds for our ℓ-value and q-value procedures are similar to those of the ℓ-value (with a Laplace prior) and q-value procedures in the Gaussian sequence model (as shown in their Theorems 1 & 2), there are three major differences in obtaining our results: * Working with the binomial distribution requires controlling the approximation error between a binomial distribution and a Gaussian distribution. Bounding this approximation error is more tedious because it has a complicated expression. * In the Gaussian sequence model, the null and alternative distributions differ only in their means, while their variances remain the same. However, in the Bernoulli sequence model, both the mean and the variance change from the null hypothesis to the alternative.
This requires us to apply different arguments for studying the ratio between two binomial distributions, which leads to different bounds for certain quantities related to the score function when analyzing ŵ (see the discussion in Section <ref>). * The suboptimality of the threshold of the posterior is not present in the study of the Gaussian sequence model. A consequence of working with a suboptimal threshold is that we need to show that not only signals below the sharp signal detection boundary but also larger signals between that boundary and the threshold do not dominate in bounding certain quantities for studying ŵ. Regarding the ℓ-value procedure, the following lemma shows that it is also possible to achieve uniform control of the FDR. This is not surprising, as intuitively, using a larger threshold leads to fewer false discoveries and therefore a smaller FDR. This can also be explained by examining our proof. Once a tight concentration bound for ŵ is established, one can follow a similar argument to derive the uniform FDR control result for any threshold larger than the sharp signal detection boundary. The proof for the ℓ-value procedure follows straightforwardly from an argument similar to the proof of Theorem <ref>. The next lemma shows that its upper bound is smaller than that of the other two procedures. For the ℓ-value given in (<ref>) with w = ŵ given in (<ref>), under the same conditions as in Theorem <ref>, there exists a constant K' depending on v_1 such that for any t ≤ 4/5 and a sufficiently large n, sup_θ_0 ∈ℓ_0[s_n](θ_0, ^ℓ) ≤K' t (log m + loglog n)/√(m log n). By comparing the upper bound in Lemma <ref> with those in Theorem <ref>, we observe that the ℓ-value procedure tends to be overly conservative. This observation is further supported by the simulation results in Section <ref>. The price the ℓ-value procedure needs to pay for being too conservative is that it fails to effectively control the false negative rate (FNR) even for moderately large signals, as we will show in the next section. § FDR AND FNR CONTROL FOR LARGE SIGNALS In this section, we present the second main result of our paper. We focus on a class of `large' signals above the boundary ζ(s_n/n). By considering only these signals, we are able to obtain sharper bounds for the FDR; for example, the q-value procedure controls the FDR at an arbitrary target level t. We also provide results on controlling the FNR and will show a negative result for the ℓ-value procedure. Before presenting our results, let us denote Θ_0[s_n] = Θ_0[s_n, a] with a = 1, where Θ_0[s_n, a] is defined in (<ref>). It is worth noting that although we assume a = 1, our results hold for any a > 1. We begin by presenting the FDR result for the q-value procedure. For the q-value given in (<ref>) with w = ŵ in (<ref>), suppose s_n ≤ n^v_1 for any v_1 ∈ (0, 1) and m ≫ (log n)^2; then for any fixed t ∈ (0, 1), lim_n →∞sup_θ_0 ∈Θ_0[s_n](θ_0, ^q) = lim_n →∞inf_θ_0 ∈Θ_0[s_n](θ_0, ^q) = t. Next, we present the upper bound for the FNR for the q-value and ℓ-value procedures. Let ŵ be the MMLE given in (<ref>) and let t ∈ (0, 1) be fixed; if s_n ≤ n^v_1 for any v_1 ∈ (0, 1) and m ≫ (log n)^2, then, as n→∞, (i) for the ℓ-value given in (<ref>) with w = ŵ, sup_θ_0 ∈Θ_0[s_n](θ_0, ^ℓ) → 0, (ii) for the q-value given in (<ref>) with w = ŵ, sup_θ_0 ∈Θ_0[s_n](θ_0, ^q) → 0.
Let ŵ be the MMLE given in (<ref>), t ∈ (0, 1) is fixed, and ℜ the multiple testing risk defined in (<ref>), if s_n ≤ n^v_1 for any v_1 ∈ (0, 1) and m ≫ (log n)^2, then, for θ_0 ∈Θ_0[s_n], the ℓ-value and the q-value given in (<ref>) and (<ref>) with w = ŵ, as n →∞, sup_θ_0 ∈Θ_0[s_n]ℜ(θ_0, ^q) → t, sup_θ_0 ∈Θ_0[s_n]ℜ(θ_0, ^ℓ) → 0. Last, we show the ℓ-value procedure is unable to control the FNR even for signals exceed the optimal boundary. Let ŵ be the MMLE in (<ref>) and t ∈ (0,1) be a fixed value, if s_n ≤ n^v_1 and m ≫ (log n)^2 for any v_1 ∈ (0, 1), then for θ_0 ∈Θ_0[s_n] and the ℓ-value in (<ref>) with w = ŵ, as n →∞, sup_θ_0 ∈Θ_0[s_n](θ_0, ^ℓ) → 1. For signals above the threshold √(1/2m(log(n/s_n) + log (√(m)) )), by following the proof of Theorem <ref>, one can easily show the ℓ-value procedure is able to effectively control the FNR for these signals. § NUMERICAL EXPERIMENTS In this section, we conduct numerical experiments to compare the finite sample performance of the ℓ-, q-, and ℓ-value procedures. We also compare them with the BH procedure under different choices of m and s_n. Data are generated as follows: for a fixed value of θ_0, s_n, and m, we generate s_n independent samples from the binomial distribution Bin(m, θ_0) and n - s_n independent samples from Bin(m, 1/2). The size of the dataset is n × m. We apply each multiple testing procedure and calculate the false discovery rate at three different significance levels, t = 0.05, t = 0.1, and t = 0.2. The weight parameter w is estimated using the `optim' function in 𝖱. Last, we repeat each experiment 10,000 times and report the average FDR value. In each experiment, we set n = 10,000. Three different values for s_n are considered: 0.001, 0.1, and 0.5, representing super-sparse, sparse, and dense scenarios and three different values for m are chosen: log^2 n ≈ 85, 200, and 1000. We present two figures in this section, each consisting of nine subplots representing different combinations of m and s_n. Figure <ref> plots the FDR of q-value and ℓ-value procedures. The three solid lines in each subplot represent the FDR of the q-value procedure at three significant levels t=0.2 (red), t = 0.1 (blue), and t = 0.05 (green). The three dashed lines represent the FDR of ℓ-value procedure with the same three levels. Figure <ref> plots the FDR of the BH (red), q- (blue), ℓ- (green), and ℓ-value (yellow) procedures. The significant level is chosen to be t=0.1. We begin by comparing the FDR estimates obtained from the q-value procedure to those from the ℓ-value procedure. Figure <ref> confirms the relation (θ_0, ^ℓ) < (θ_0, ^q) in Theorem <ref>. We found that the q-value procedure can largely overestimate the FDR in the dense case particularly when m is small. On the other hand, the ℓ-value procedure consistently keeps the FDR below the targeted level across all nine scenarios. We also found that when θ_0 is close to 0.5, the q-value procedure can significantly overestimate the FDR in sparse scenarios. In addition, we observe a bump in the FDR estimates for intermediate signals. The depth of the bump increases with larger values of s_n or m. This is phenomenon has not been found in the Gaussian sequence model, suggesting the differences in finite sample performance between the two models. Moreover, we observe that even when m = 85, both procedures can provide still reasonable estimates for the FDR, especially in sparse cases. This finding suggests the possibility to relax the assumption m ≫ (log n)^2 . 
Regarding the ℓ-value procedure, it provides the smallest FDR among all three procedures. This finding is not surprising, as the result in Lemma <ref> demonstrates that the upper bound for the ℓ-value procedure is much smaller than that of the other two procedures. Indeed, we found that the FDRs of the ℓ-value procedure are consistently close to 0 across all nine scenarios.

In Figure <ref>, we also present the results obtained using the BH procedure. The BH procedure is a classical method in multiple testing. It consists of three steps: first, one computes the p-value for each individual test; second, one sorts these p-values in ascending order and computes the critical value associated with each rank; last, one finds the largest p-value lying below its critical value and rejects the null hypotheses corresponding to this and all smaller p-values. We compared our three multiple testing procedures with the BH procedure and found that the BH procedure tends to overestimate the FDR when signals are sparse and to underestimate it when signals are dense. On the other hand, both the q-value and ℓ-value procedures are more stable in controlling the FDR across all nine scenarios. The ℓ-value procedure remains too conservative compared to the BH procedure. We thus do not recommend using the ℓ-value procedure in practice.

§ THE TWO-CLASS CROWDSOURCING PROBLEM WITH SPARSITY CONSTRAINTS

Consider the two-class crowdsourcing problem discussed in Section <ref>. This problem has been studied by several authors under various assumptions. Most notably, <cit.> obtained a sharp bound for this problem without a sparsity assumption, and <cit.> imposes a sparsity assumption without considering the replication due to multiple workers. We consider multiple workers classifying n objects, with n ≫ m. The number of workers assigned to each object is allowed to be uneven. We denote by m_j the number of workers classifying the j-th object. The model can be written as follows: one observes X_j ind∼Bin(m_j, θ_j), with m_1, …, m_n ≤ m and j = 1, …, n, and the goal is to test H_0j: θ_0,j = 1/2 versus H_1j: θ_0,j≠ 1/2. The logarithm of the posterior distribution becomes L^C(w) = ∑_j=1^n logφ_j(X_j) + ∑_j=1^n log (1+w β_j(X_j)), where β_j(u) = (g_j/φ_j)(u) - 1, g_j = (1+m_j)^-1 and φ_j(u) = Bin(u; m_j, 1/2). The score function now becomes S^C(w) = ∑_j = 1^n β_j(X_j, w), β_j(x, w) = β_j(x)/1 + wβ_j(x), and the MMLE becomes ŵ^C = argmax_w ∈ [1/n, 1] L^C(w). The bound for ŵ^C is essentially the same as that for the MMLE ŵ in (<ref>) if min_1 ≤ j ≤ n m_j ≫ (log n)^2. Consider the ℓ-value given in (<ref>) and the q-value given in (<ref>) with w = ŵ^C, suppose θ_0 ∈ℓ_0[s_n] in (<ref>), and assume s_n ≤ n^v_1 for any v_1 ∈ (0, 1) and min_1≤ j≤ n m_j ≫ (log n)^2; then there exists a constant K depending on v_1 such that for any t ≤ 4/5 and a sufficiently large n, sup_θ_0 ∈ℓ_0[s_n]FDR(θ_0, ^ℓ) ≤ K t loglog n/√(log n) and sup_θ_0 ∈ℓ_0[s_n]FDR(θ_0, ^q) ≤ K t log (1/t).

§ DISCUSSION

We introduced three empirical Bayes multiple testing procedures for sparse binary sequences, the ℓ-value, q-value, and Cℓ-value procedures, and provided an in-depth frequentist theoretical analysis of these procedures. We found that the q-value and Cℓ-value procedures achieve excellent control of the FDR, while the ℓ-value procedure is too conservative. These results were verified through simulation studies. The most challenging part of our analysis is to obtain a tight concentration bound for the MMLE ŵ, for
which we derived sharp results for bounding a number of quantities related to the binomial distribution under both the null hypothesis and the alternative hypothesis. These results eventually lead us to obtain the uniform FDR control results for our procedures. Regarding the prior, we choose uniform throughout our paper. Now considering other priors such as Beta(α, α) with α > 1, a simply calculation reveals that g(x, α) = ∫_θφ_θ(x) Beta(θ; α, α) dθ = m xΓ(2α)Γ(x + α) Γ(m-x+α)/(Γ(α))^2Γ(m + 2α). Clearly, g(x, α) is a nonlinear function of x. Using the well known approximation for the gamma function Γ(z+a) ∼Γ(z) z^a for any fixed a as z →∞, we obtain g(x, α) ∼( x(m-x)/m^2)^α - 1Γ(2α)/m(Γ(α))^2, which in general does not close to √(2/π m), the multiplying factor for calibrating the posterior. Thus, choosing other values for α in the prior will not resolve the issue encountered with the ℓ-value procedure. In sum, the present work serves as an initial step for exploring multiple testing procedures on discrete outcomes data using empirical Bayes approaches. Several exciting directions are worth exploring, including extending the current methodology to handle one-sided tests, instead of the two-sided tests as we considered here, and developing new multiple testing approaches for more advanced models such as the sparse binary regression model and the Ising model using empirical Bayes <cit.>. Given that Bayesian approaches are routinely used for uncertainty quantification, another interesting direction is to study the frequentist coverage of credible sets from the posterior distribution. Typically, a self-similarity or excessive bias restriction-type condition is needed for achieving a correct coverage <cit.>. However, since the thresholding rule from our posterior is not optimal, it is unclear whether such a type of condition would be sufficient. In addition, nonasymptotic bounds for the minimax risk under various loss functions, e.g., the expected Hamming loss in <cit.>, for this model have not been studied thoroughly, which has a potential for valuable contributions in future studies. § A LIST OF USEFUL MULTIPLE-TESTING RELATED QUANTITIES Before proving any results, we list several quantities that will be frequently used in our proofs: * The posterior distribution is given by P^π(θ X, w) = ⊗_j=1^n {ℓ_j(X_j; w) δ_1/2 + (1-ℓ_j(X_j; w)) 𝒢_X_j}, where 𝒢_x = Β(θ; x + 1, m-x + 1), φ(x) = Bin(x, m/2), g(x) = (m+1)^-1, and ℓ(x; w) = P^π(θ = 1/2 X, w) = (1 - w) φ(x) / (1 - w) φ(x) + w g(x) . * The log-marginal density of w is given by ℒ(w X) = ∑_j=1^n (logφ(X_j) + log (1 + wβ(X_j))) and the score function is given by S(w) = ∑_j=1^n β(X_j, w), β(x, w) = β(x)/1 + w β(x). * Thresholds of ℓ-, q-, and ℓ-value procedures are defined as t_n^ℓ(w, t) = η^ℓ(r(w,t)) - 1/2, t_n^ℓ(w, t) = η^ℓ(r(w,t)) - 1/2, and t_n^q(w, t) = η^q(r(w,t)) - 1/2, respectively, where η^ℓ(u) = 1/m(φ/g)^-1(u), η^ℓ(u) = 1/m(φ/g)^-1(√(2)(1+m) u/√(π m)), η^q(u) = 1/m(/)^-1(u). These quantities will be studied in Section <ref>. * Taking the expectation for the score function, we denote w^⋆ be the solution of 𝔼_0 S(w^⋆) = 0, which can be also written as (n - s_n) m̃(w^⋆) = ∑_j: θ_0,j≠ 1/2 m_1(θ_0,j, w^⋆), where m̃(w) = - 𝔼_0 β(x, w) = - ∑_u=0^m β(x, w) φ(x), m_1(θ, w) = 𝔼_θβ(x, w) = ∑_u=0^m β(x, w) φ_θ(x), m_2(θ, w) = 𝔼_θβ(x, w)^2 = ∑_u=0^m β(x, w)^2 φ_θ(x). The above three quantities will be studied in Section <ref>. They play an important role in bounding the MMLE ŵ, as we will show that ŵ is close to w^⋆ when there is enough signal. 
* Three boundary values will be frequently used in our proofs: ζ_n(w), ξ_n(w), and ν_n(w). Specifically, ζ_n(w) = √(1/2mlog(1/w)), ξ_n(w) is the solution of β(m/2 + mξ(w)) = 1/w, and ν_n(w) is the solution of β(m/2 + mν_n(w)) = 0. The last two quantities will be studied in Section <ref>. * The number of false and true discoveries for a sequence of tests = (_1, …, _n) are defined by FD_(t, w) = ∑_j: θ_0,j = 1/2_j(t, w), TD_(t, w) = ∑_j: θ_0,j≠ 1/2_j(t, w). We also recall the definition of the false discovery rate (FDR) and the false negative rate (FNR) given by (θ, ) = ∑_j=1^n 1{θ_j = 1/2}_j/1 ∨∑_j=1^n _j, (θ, ) = ∑_j=1^n 1{θ_j ≠ 1/2}(1-_j)/1 ∨∑_j=1^n 1{θ_j = 1/2}. § RELATIONS BETWEEN Η^ℓ, Η^Q, Η^ℓ, Ζ, AND Ξ In this section, we study the relation between the three thresholds η^ℓ, η^q, η^ℓ with ζ and ξ. We begin by examining the monotonicity of the two functions (φ/g)(·) and (/)(·). Non-asymptotic bounds for the three thresholds will be derived in Section <ref>, and comparisons between these thresholds and ζ, ξ will be made in Section <ref>. The proof of Lemmas <ref>, <ref> and <ref> will be given in Section <ref> and the proof of Lemma <ref> is provided in Section <ref>. Note that we assume that m is even throughout the paper. Our results can be easily extended to the case where m is odd with minimal modifications. §.§ Monotonicity for (φ/g)(·) and (/)(·) For φ(x) = Bin(m, 1/2) and g(x) = ∫φ_θ(x) γ(θ) dθ = (m + 1)^-1, the function (φ/g)(m/2 + |y|) is symmetric at y = 0 and is monotone increasing on y ∈ [-m/2, 0) and monotone decreasing on y ∈ [0, m/2). Since φ(·) is a binomial distribution with parameters m and 1/2. Clearly, it is symmetric at m/2 and is monotone increasing when y ∈ [-m/2, 0) and monotone decreasing on y ∈ [0, m/2). Using the fact that g = (1+m)^-1 is a constant, we complete the proof. Let (x) be the upper tail probability of Bin(m, 1/2) and (x) = (m - x)/(m+1), the function (/)(m/2 + |y|) is symmetric at y = 0 and is monotone increasing on y ∈ [-m/2, 0) and monotone decreasing on y ∈ [0, m/2). To verify (𝐁̅/)(m/2 + |y|) is symmetric at y = 0 is trivial. To show it is monotone decreasing on y ∈ [0, m/2], we plug-in the expressions of 𝐁̅ and and obtain 𝐁̅/(m/2 + y) - 𝐁̅/(m/2 + y + 1) = (m+1) ( ∑_z= m/2 + y^m φ(z)/m/2 - y - ∑_z= m/2 + y +1^m φ(z)/m/2 - y - 1) = (m+1) ( (m/2 - y)φ(m/2 + y) - ∑_z= m/2 + y^m φ(z)/(m/2 - y)(m/2 - y - 1)), which is always positive as φ(z) < φ(m/2 + y) for all z ≥ m/2 + y+1. The proof of the monotone increasing part is similar and hence omitted. §.§ Bounding η^ℓ, η^q, and η^ℓ Define η^ℓ(u) = 1/m(φ/g)^-1(u) for u ∈ (0, 1), let g(u) = (1+m)^-1 and φ(u) = Bin(m, 1/2), if |η^ℓ(u) - 1/2| ≤η_∘ for some u ∈ (0, 1) and η_∘ < 1/2 is a fixed constant, then η^ℓ(u) ≤1/2 + √(1/2m( log (1/u) + log(√(2)(1+m)/√(π m (1-4 η_∘^2))) + 1/12m)), η^ℓ(u) ≥1/2 + √(1/2m( log (1/u) + log(√(2)(1+m)/√(π m (1-4 η_∘^2))) ) ( 1 + 8η_∘^2/3(1-4η_∘^2)^2)^-1). Moreover, if m ≫log^2(1/u) for some u ∈ (0, 1), then η^ℓ(u) ∼1/2 + √(1/2m( log(1/u) + log(√(2)(1+m)/√(π m)) )). The equation η^ℓ(u) = 1/m(φ/g)^-1(u) directly implies 1/u = (g/φ)(m η^ℓ(u)), which further implies m m η^ℓ(u) = 2^m u/m + 1. By Lemma <ref> and let η̃^ℓ(u) = η^ℓ(u) - 1/2, we have m m η^ℓ(u) = √(2) e^- m T(1/2 + η̃^ℓ(u), 1/2) + mlog 2 + ω(m)/√(π m (1-4(η̃^ℓ(u))^2)), where T(a, p) = alog(a/p) + (1-a)log((1-a)/(1-p)) and ω(m) ∼ 1/(12m). By (2) in Lemma <ref> and denote η̃^ℓ := η̃^ℓ(u) for simplicity, one obtains 2 (η̃^ℓ)^2 ≤ T(1/2 + η̃^ℓ, 1/2) ≤ 2(η̃^ℓ)^2 + 8 (η̃^ℓ)^4/3 (1-4(η̃^ℓ)^2). 
Plugging the last display into (<ref>), we obtain 2m(η̃^ℓ)^2 ≤log (1/u) + log(√(2)(1+m)/√(π m (1-4(η̃^ℓ)^2))) + 1/12m, and 2m(η̃^ℓ)^2(1 + 8(η̃^ℓ)^2/3(1-4(η̃^ℓ)^2)^2) ≥log (1/u) + log(√(2)(1+m)/√(π m(1-4(η̃^ℓ)^2))). The result follows by the assumption η̃^ℓ < η_∘. The second result follows by noting that m ≫log^2(1/u) → 0 implies m(η̃^ℓ)^4 → 0, and thus 1 - 4(η̃^ℓ)^2 ∼ 1 and m(η̃^ℓ)^4/(1-4(η̃^ℓ)^2)^2→ 0. For η^q(u) = 1/m (/)^-1(u) for u ∈ (0, 1), with (u) = 1-(u) and (u) = 1 - (u), (u) and (u) are cdfs of the densities φ(u) and g(u) respectively, if η^q(u) -1/2 ≤η_∘, η_∘ < 1/2 is a fixed constant, we have η^q(u) ≤1/2 + √(log (1/u) + A_1(m, η_∘) - log (√(log(1/u) + A_2(m, η_∘)))/2m), η^q(u) ≥1/2 + √(log (1/u) + A_2(m, η_∘) - log (√(log(1/u) + A_1(m,η_∘)))/2m), where A_1(m, η_∘) = log(1 + 1/m) - log( √(π)(1/2 - η_∘) √(1-4η_∘^2)) and A_2(m, η_∘) = log(1 + 1/m) + log( √(2/π)η_∘) - (12m)^-1. Moreover, if m →∞, then η^q(u) ∼1/2 + √(1/2m(log(1/u) - log√(log(1/u) ))). By definition of η^q(u), 𝐁̅(mη^q(u) ) = u (mη^q(u)). Also, we have (mη^q(u)) = m(1 - η^q(u))/(m+1) and 𝐁̅(mη^q(u)) = ∑_x > mη^q(u)φ(x). Applying Lemma <ref> and then (2) in Lemma <ref>, and denote η̃^q: = η̃^q(u) = η^q(u) - 1/2 for simplicity, then √(2)e^-2m(η̃^q)^2- (12m)^-1/√(π m (1- 4(η̃^q)^2))≤ u (mη̃^q) ≤η^q(u) √(2)e^-2m(η̃^q)^2/2 η̃^q √(π m (1- 4(η̃^q)^2)). The upper bound in the last display implies 2m(η̃^q)^2 + log( √(2m)η̃^q ) ≤log (1/u) + log (1+1/m) + log(1/2 + η̃^q/1/2-η̃^q) - log(√(π (1-4(η̃^q)^2))) ≤log (1/u) + log (1+1/m) - log(√(π) (1/2 - η_∘) √(1-4η_∘^2)), which we used - log (1/2-η̃^q(u)) < - log (1/2 -η_∘), log(1/2 + η̃^q) ≤ 0, and -log(1-4(η̃^q)^2) ≤ - log(1 - 4η_∘^2). The lower bound can be bounded in a similar way. By the lower bound in (<ref>), 2m(η̃^q)^2 + log(√(2m)η̃^q ) ≥log (1/u) + log (1+1/m) + log(√(2/π)η̃^q) -1/12m ≥log (1/u) + log (1+1/m) + log(√(2/π)η_∘) -1/12m By combining the bounds for 2m(η̃^q)^2 + log(√(2m)η̃^q) and dividing 2m on both sides of the inequality, we obtain the the upper and lower bounds for η̃^q. If m →∞, the asymptotic bound follows immediately by noting that log(1 + 1/m) ≪ m, 1/(12m^2) → 0, log(1/u) ≫log√(log(1/u)) for any u ∈ (0, 1), and η_∘ is a fixed constant. Define η^ℓ(u) = 1/m(φ/g)^-1(u √(2(1+m)^2/π m)), u ∈ (0, 1), with g(u) = (1+m)^-1 and φ(u) = Bin(m, 1/2), if η^ℓ(u) - 1/2 ≤η_∘, η_∘ < 1/2 is a fixed constant, then η^ℓ(u) ≤1/2 + √(1/2m(log (1/u) - log (1-4 η_∘^2) + 1/12m)), η^ℓ(u) ≥1/2 + √(1/2m(log (1/u) - log (1-4 η_∘^2)) (1 + 8η_∘^2/3(1-4η_∘^2)^2)^-1). Moreover, if m ≫log^2(1/u), then η^ℓ(u) ∼1/2 + √(1/2mlog(1/u)). The proof is similar to the proof of Lemma <ref>. By replacing u with √(2/π m) (1+m) u in the upper and lower bounds, one immediately obtains the result. §.§ Comparing η^ℓ(r(w,t)), η^q(r(w,t)), η^ℓ(r(w,t)) with ξ(w) and ζ(w) Throughout this section, we define η̃^ℓ(u) = η^ℓ(u) - 1/2, η̃^q(u) = η^q(u) - 1/2, and η̃^ℓ(u) = η^ℓ(u) - 1/2 for η^ℓ, η^q, and η^ℓ defined in (<ref>), (<ref>), and (<ref>) respectively. The following lemmas present bounds for these quantities and establish their relationships with ξ(w) and ζ(w). For η^q(u) given in (<ref>), let u = r(w,t) = wt/(1-w)(1-t) and ζ(w) given in (<ref>), then for any w ≤ w_0(t), w_0(t) is sufficiently small, and a fixed t ∈ (0, 1), there exists a constant η_∘ such that η̃^q ≤η_∘, η_∘≤ 1/2 and C = C(w_0, t, m, η_∘) such that |η̃^q(r(w,t)) - ζ(w)| ≤log((1-t)/t) + C/√(2mlog(1/w)). 
By the fact that |√(a) - √(b)| = |a-b|/(√(a) + √(b)) for any a, b > 0, |η̃^q(r(w,t)) - ζ(w)| = |(η̃^q(r(w,t)))^2 - ζ^2(w)| /√(2m)(η̃^q(r(w,t)) + ζ(w) ). What remains is to bound |(η̃^q(r(w,t)))^2 - ζ^2(w)|. By Lemma <ref>, for the same A_1, A_2 given in the lemma, let R = (1-w)(1-t)/t ≤ (1-t)/t, if (η̃^q(r(w,t)))^2 ≥ζ^2(w), then 2m(η̃^q(r(w,t)))^2 - 2mζ^2(w) ≤log ((1-t)/t) + A_1 - log(√(log (wt/((1-w)(1-t)) + A_2))). If (η̃^q(r(w,t)))^2 < ζ^2(w), then 2mζ^2(w) - 2m(η̃^q(r(w,t)))^2 ≤ - log((1-t)/t) - log(1-w) - A_2 + log(√(log (wt/((1-w)(1-t)) + A_1))). Using the bound 0 < w≤ w_0, denote C_1 = A_1 - log(√(log (t/(1-t)) - log(1-w_0) + A_2)) and C_2 = A_2 - log(√(log (t/(1-t)) - log(1-w_0) + A_1)) + log(1-w_0) and let C = C_1 ∨ C_2, and then one can bound the denominator from below by √(2m)ζ(w) = √(log(1/w)) to obtain the result. For η^ℓ(u) in (<ref>) and ζ(w) in (<ref>), let u = r(w,t), if w ≤ w_0(t) for a sufficiently small w_0, η_∘ < 1/2, then for K(η_∘) = 8η_∘^2/3(1-4η_∘^2) and fixed t∈ (0, 1) and C(w_0), |η̃^ℓ(r(w,t)) - ζ(w)| ≤log(t(1-t)^-1) + K(η_∘) log(1/w) + C(w_0)/√(2mlog(1/w)). Let R = (1-t)(1-w)/t, then by Lemma <ref>, if 2m(η̃^ℓ(r(w,t)))^2 ≥ 2mζ^2(w), then 2m(η̃^ℓ(r(w,t)))^2 - 2 mζ^2(w) ≤log R - log (1-4η_∘^2) + 1/(12m) := U_1. If 2m(η̃^ℓ(r(w,t)))^2 < 2mζ^2(w), let K(η_∘) = 8η_∘^2/3(1-4η_∘^2)^2 for some fixed η_∘ < 1/2, then 2mζ^2(w) - 2m(η̃^ℓ(r(w,t)) )^2 ≥ - log R + log(1-4η_∘^2) - log (w) K(η_∘) /1+K(η_∘) := -U_2 Using that |√(a) - √(b)| = |a-b|/(√(a) + √(b)) for any a, b > 0, if U_1 ≥ -U_2, then |η̃^ℓ(r(w,t)) - ζ(w) | ≤log R - log (1-4η_∘^2) + 1/(12m)/2m(η̃^ℓ(r(w,t)) + ζ(w)) ≤log (t(1-t)^-1) + log 2 + 1/(12m)/2m(η̃^ℓ(r(w,t)) + ζ(w)). If U_1 < -U_2, then |η̃^ℓ(r(w,t)) - ζ(w) | ≤log R + K(η_∘)log (1/w)/2m(1+K(η_∘))(η̃^ℓ(r(w,t)) + ζ(w)) ≤ - log (1-w_0) + log(t(1-t)^-1) + K(η_∘) log(1/w)/2mη̃^ℓ(r(w,t)) + ζ(w)) By combining the two cases, |η̃^ℓ(r(w,t)) - ζ(w) | ≤log(t(1-t)^-1) + log 2 + K(η_∘) log(1/w) - log(1-w_0)/2mη̃^ℓ(r(w,t)) + ζ(w)) ≤log(t(1-t)^-1) + K(η_∘) log(1/w) + C(w_0)/√(2mlog(1/w)), where C(w_0) = log2 - log(1-w_0). For η^q(u) and η^ℓ(u) given in (<ref>) and (<ref>) respectively, let ξ(w) be the solution of β(u) = 1/w as given in Lemma <ref>, then for a sufficiently large m, η̃^q(r(w,t)) ≤η̃^ℓ(r(w,t)) ≤ξ(w). It is sufficient to prove η̃^ℓ(u) ≤ξ(w), as η̃^q ≤η̃^ℓ by Lemma <ref>. From (<ref>) and let D = (1 + 8ξ_∘^2/3(1-4ξ_∘^2)^2)^-1 for any fixed ξ_∘ < 1/2 and m > M_0 for a sufficiently large M_0, then 2m ξ^2(w) - 2m (η̃^ℓ(r(t, w)))^2 ≥ D [ log(1+1/w) + log(√(2)(1+m)/√(π m)) ] - log (1/r(t,w)) + log(1-4η_∘^2) -1/(12m) ≥ D log (√(2 M_0/π)) + log( (1 + 1/w)^D(1-4η_∘^2)r(t, w) ) - 1/12M_0. Choosing M_0 such that log (√(2 M_0/π)) - (12DM_0)^-1 > - log( (1 + 1/w) [ (1-4η_∘^2)r(t, w)]^1/D), such M_0 always exists for any fixed D, as both w, t ≠ 0 or 1 and η_∘ bounded away from 1/2. Thus, the last line in the last display is positive, which implies η̃^ℓ≤ξ(w). For η^ℓ given in (<ref>) and ξ(w) is the solution for β(u) = 1/w in Lemma <ref>, suppose η̃^ℓ(r(w,t)) ≤η_∘ for a fixed η_∘ <1/2, w ≤ w_0(t), then there exists some constant C > 0 depending on t, η_∘, w_0 such that for all t ∈ (0, 1), |η̃^ℓ(r(w,t)) - ξ(w)| ≤|log(t(1-t)^-1)| + C/2m(ξ(w) + η̃^ℓ(r(w,t)) ) Let us denote η̃^ℓ(r(w,t)) = η^ℓ(r(w,t)) - 1/2. By the upper bound of η^ℓ(·) in Lemma <ref>, we have 2m(η̃^ℓ(r(w,t)))^2 - 2 mξ^2(w) ≤ -log(1-w) - log (1-4η_∘^2) + 1/(12m) + log (t(1-t)^-1) ≤log (t(1-t)^-1) + D_1, where D_1 is a fixed constant, as w ≤ w_0 and η_∘ is smaller than 1/2. 
On the other hand, using the lower bound of η^ℓ(·) in Lemma <ref>, let D_2 = (1+ 8η_∘^2/3(1-4η_∘^2)^2)^-1, then 2m ξ^2 - 2 m(η̃^ℓ(r(w,t)))^2 ≤ |log(t(1-t)^-1)| + (D_2 - 1) log(wt(1-t)^-1) + D_2 (log(1-w) + log(1-4η_∘^2) - log(√(2)(1+m)/√(π m))) ≤ |log(t(1-t)^-1)| - (D_2 - 1) log(1-t). Since t is a fixed constant between 0 and 1, let D_3 = - (D_2 - 1) log(1-t) > 0. By combining the two upper bounds and letting C = max{D_1, D_3}, using |a - b| = |a^2 - b^2|/a + b for any a, b >0, we obtain the result. §.§ Proof of Lemmas <ref>, <ref> and <ref> We prove Lemmas <ref>, <ref> and <ref> together. For Lemma <ref>, by definition, ℓ(x; w, g) ≤ t ⟺ φ/g(x) ≤ r(w, t). By Lemma <ref>, φ/g(x) is symmetric at x = m/2 and is monotone decreasing on x ∈ [m/2, m] and monotone increasing on x ∈ [0, m/2). Therefore, the last display implies x - m/2 ≥ mη^ℓ(r(w, t)) - m/2 if x ≥ m/2 and m/2 - x ≥ mη^ℓ(r(w, t)) - m/2 if x ≤ m/2. By combining the two cases, we proved the result. The proof of (a) in Lemma <ref> is similar and is omitted. To prove (b), by the definition of the q-value, q(x; w, g) ≤ t ⟺ 𝐁̅/(m/2 + |y|) ≤ r(w, t). By Lemma <ref>, (𝐁̅/)(·) is symmetric at y = 0 and is monotone decreasing on x ∈ [m/2, m] and monotone increasing on x ∈ [0, m/2); therefore, x - m/2 ≥ m t_n^q when x ∈ [m/2, m] and m/2 - x ≥ mt_n^q when x ∈ [0, m/2). §.§ Proof of Lemma <ref> Since φ(v) is a symmetric function at m/2 and is monotone decreasing on [m/2, m], g(v) is a constant, it is easy to verify that (φ/g)^-1(v) is a monotone decreasing function on 0 < v<1/2 and is symmetric at 1/2, which immediately implies η^ℓ(u) < η^ℓ(u) for all u ∈ (0, 1). We can also show η^q(x) < η^ℓ(x). Consider the function f(u) = (^-1(u)) = m - ^-1(u)/1+m, u ∈ (0, 1/2). By calculation f'(u) = (g/φ)(^-1(u)), which is decreasing on (0, 1/2). Thus, f(u) is strictly concave on (0,1/2). Also note that f(0) = 0, by the mean-value theorem, (^-1(u)) ≥ u (g/φ) (^-1(u)). Since for any integer x, m > x > m/2, there exists one-to-one mapping to ^-1(u) for u ∈ (0,1/2), so for such x, we have (/)(x) ≤ (φ/g)(x), which implies η^q(x) < η^ℓ(x). Last, we prove η^q(u) ≤η^ℓ(u). Let C_m = √(π)m/√(2)(m+1), it suffices to show (x)/(x)≤C_m/√(m)φ(x)/g(x) ⟺ (x)/φ(x)≤C_m (x)/√(m) g(x), where (x) = ∑_k = x+1^m φ(k). Let R(x) = (x)/φ(x), H(x) = C_m(m - x)/√(m). The remaining proof consists three steps: we need to show 1) R(m) = 0 and H(m) = 0; 2) R(m/2) ≤ H(m/2); and 3) R(x) is concave on x ∈ (m/2, m). Proving 1) is trivial, as (m) = 0, so R(m) = H(m) = 0. Next, we verify 2). When m = m/2, H(m/2) = m √(π m)/2√(2) (m+1). Using Lemma <ref> and note that (x) = P(X > x) = ∑_k = x+1^m φ(k), (m/2) ≤√(m)/2φ(m/2; m-1) Y(0) exp(√(π/(2m))), where Y(0) = Φ̅(0)/ϕ(0) = √(π/2) and φ(m/2; m-1) = Bin(m/2; m -1, 1/2). Using that φ(m/2; m-1)/φ(m/2) = m/2(m-1), we obtain (m/2)/φ(m/2)≤m √(π m)/4√(2) (m -1) e^√(π/(2m)), which is less than H(m/2) if m ≥ 8. Last, we prove 3). We work with the continuous version of the function R(x) using gamma functions. The proof is inspired by that of Theorem 2(d) in <cit.>. To start, we calculate R'(x) = -1 - R(x)(γ(m - x + 1) - γ(x + 1)), where γ(x) = Γ'(x) / Γ(x) is the digamma function. Let K(x) = γ(m - x+1) - γ(x+1), then R'(x) = - 1 - R(x)K(x). To prove R(x) is concave, it is sufficient to show R(x) K(x) is increasing. Using that (m) = 0 and φ(m)/K(m) = 1/(2^m K(m)) ≈ 0 for m ≥ 8, let's write this product as R(x) K(x) = (x)/φ(x)/ K(x)≈(x) - (m)/φ(x)/K(x) - φ(m)/K(m). 
In view of the monotone form of l'Hospital’s rule, the above function is increasing if x →1/K'(x)/K^2(x) - 1 = d/dx(x)/d/dx(φ(x)/ K(x)) is increasing. It is then need to check the function K'(x)/K^2(x) is decreasing. By calculation, K'(x)/K^2(x) = -γ'(m-x+1) - γ'(x+1)/(γ(m - x + 1) - γ(x+1))^2. The denominator of (<ref>) is increasing: one can verify this by noting that γ(x+1) - γ(m-x+1) is an increasing function of x for x > m/2 using the well known fact that γ(s + 1) = ∑_k = 0^s k^-1 - E, E is the Euler–Mascheroni constant. The numerator of (<ref>) involves two trigamma functions. Using the integral representation of a trigamma function γ'(s + 1) = - ∫_0^1 t^s+1/1-tlog t dt, we obtain -γ'(m-x+1) - γ'(x+1) = ∫_0^1 (t^m-x + t^x) log t/1-t dt, which is decreasing as its derivative, which equals to ∫_0^1 (t^x - t^m-x) (log t)^2/(1-t) dt, is negative due to t ≤ 1 and x > m/2. § BOUNDING P_Θ_0 = 1/2(ℓ≤ T), P_Θ_0 = 1/2(Q ≤ T), AND P_Θ_0 = 1/2(Cℓ≤ T) For ℓ(x) defined in (<ref>) and let r(w, t) = wt/(1-w)(1-t), for any fixed t ∈ (0, 1) and w ≤ w_0 ∈ (0, 1), suppose log^2 (1/r(w,t))/m → 0 as m →∞ and define ε = 2η^ℓ(r(w,t)) - m + 1/m-1, then, as m →∞, ϕ(√(M)ε)/√(M)ε≤ P_θ_0 = 1/2(ℓ(x) ≤ t) ≤2(1+o(1)) ϕ(√(M)ε)/√(M)ε. In addition, for C > 2√(2/π), we have P_θ_0 = 1/2(ℓ(x) ≤ t) ≤C r(w,t)/√(m). By the definition of ℓ(x), P(ℓ(x) ≤ t) = P((φ/g)(x) ≤ r(w, t)). Let ũ = x - m/2, we rewrite (φ/g)(x) = (φ/g)(ũ + m/2). Note that (φ/g)(·) is symmetric at ũ, as φ(·) = Bin(m, 1/2) is symmetric at m/2 and g(·) = (1+m)^-1 is a constant. As we denote η^ℓ(u) = 1/m(φ/g)^-1(u) and η̃^ℓ(u) = η^ℓ(u) - 1/2, we have P((φ/g)(ũ + m/2) ≤ r(w, t)) = P(|ũ| ≥ mη̃^ℓ (r(w,t))) = 2(m η^ℓ (r(w,t))). We first prove (<ref>). Let K = mη^ℓ(r(w,t)) - 1 and M = m - 1, denote ε = (2K - M)/M, then by Lemma <ref>, (m η^ℓ (r(w,t))) = Φ̅(ε√(M))exp(A_m(ε)), where A_m(ε) = - Mε^4 γ(ε) - log (1-ε^2) - λ_m-K+1 + r_K+1, γ(ε) ∼ 1/12, λ_m-K+1 =O(1/m) and r_K+1 = O(1/m). By Lemma <ref>, the last display can be further bounded by (m η^ℓ (r(w,t))) ≤ϕ(ε√(M))exp(A_m(ε))/ε√(M). From Lemma <ref>, when m is large, we have K ∼ m/2 + √(m(log (1/r(w,t)) + log√(2m/π))/2). Therefore, ε∼√((log (1/r(w,t)) + log√(2m/π))/(2m)). By assumption m ≫log^2 (1/r(w, t))), A_m(ε) → 0. By collecting the relevant bounds, we obtain P((φ/g)(ũ + m/2) ≤ r(w, t)) ≤2 (1+o(1))ϕ( ε√(M))/ε√(M) For the lower bound, using the lower bound of the Gaussian tail in Lemma <ref>, we have Φ̅(ε√(M)) ≥Mε^2/1+Mε^2ϕ(ε√(M))/ε√(M)≥ϕ(ε√(M))/2ε√(M), as long as √(M)ε > 1. Also, note that exp(A_m(ε)) ≥ 1. Therefore, we have P((φ/g)(ũ + m/2) ≤ r(w, t)) ≥ϕ(ε√(M))/ε√(M). Next, to prove the upper bound in (<ref>), it is more convenience to bound (<ref>) directly. We will use the result that 2m (η̃^ℓ(r(w,t)))^2 = - log (r(w,t)) + log (C_m√(m))+ o(1) for some positive constant C_m < √(2/π)(1+1/m) < 2√(2/π). By invoking the Bernstein inequality in Lemma <ref> (choosing M = 1, V = m/4, and A = mη̃^ℓ(r(w,t))), we obtain P( |ũ| ≥ mη̃^ℓ(r(w,t))) ≤ 2exp( - m^2(η̃^ℓ(r(w,t)))^2/m/2 + mη̃^ℓ(r(w,t)) /3). Since -log (r(w,t))/m → 0 and log (√(m))/m → 0, we have m/2 ≫ mη̃^ℓ(r(w,t)) /3. Thus, the last display can be bounded by 2exp(- 2m (1- o(1)) (η̃^ℓ(r(w,t)))^2) ≤ C r(w,t)/√(m) for a sufficiently large m for C > C_m. For the ℓ(x) given in (<ref>), let r(w, t) = wt/(1-w)(1-t) for any fixed t ∈ (0, 1) and w ≤ w_0 ∈ (0, 1), suppose (log (1/r(w,t)))^2/m → 0 as m →∞ and define ε̃= 2η^ℓ(r(w,t)) - m + 1/m-1, then, ϕ(√(M)ε̃)/√(M)ε̃≤ P_θ_0 = 1/2(ℓ(x) ≤ t) ≤2(1+o(1)) ϕ(√(M)ε̃)/√(M)ε̃. 
In addition, for C' > 2, we have P_θ_0 = 1/2(ℓ(x) ≤ t) ≤ C' r(w,t). By the definition of ℓ(x)-value in (<ref>), we can write P(ℓ(x) ≤ t) = P((φ/g)(x) ≤√(2/(π m)) r(w,t) (1+m)). By replacing the upper bound in (<ref>) with √(2/(π m)) r(w,t) (1+m), we obtain P((φ/g)(ũ + m/2) ≤ r(w, t)) = P(|ũ| ≥η̃^ℓ(r(w,t))), where η̃^ℓ(r(w,t)) = η^ℓ(r(w,t))- 1/2. By Lemma <ref> and the assumption log^2 (1/r(w,t))/m → 0, 2m (η^ℓ(r(w,t)))^2 = - log (r(w,t)) + o(1). The remaining proof is exact the same as in Lemma <ref>, one just needs to replace η^ℓ with η^ℓ. We thus omit the detail. For q(x) defined in (<ref>) and let r(w, t) = wt/(1-w)(1-t), for any fixed t ∈ (0, 1) and w ∈ (0, 1), P_θ_0 = 1/2(q(x) ≤ t) = 2 r(w, t) (η^q(r(w,t))) ≤ 2 r(w, t). Recall that η^q(·) = 1/m(𝐁̅/)^-1(·) and let ũ = |x - m/2|, then by the definition of the q-value in (<ref>), P_θ_0 = 1/2(q(x) ≤ t) = P((𝐁̅/)(x) ≤ r(w,t)) = P(|ũ| ≥ m(η^q(r(w,t)) - 1/2) ) = 2 (m η^q(r(w,t))) = 2 r(w,t)(mη^q(r(w,t))). The upper bound follows trivially by noticing (mη^q(r(w,t))) ≤ 1 for any w, t < 1. § PROOF OF RESULTS IN SECTION <REF> In this section, we prove Theorem <ref> and Lemma <ref>. The concentration bound for ŵ in Section <ref> is used in our proof. An essential step for obtaining the uniform FDR control result is to apply Lemma <ref> to control those signals below the threshold ξ(w). §.§ Proof of Theorem <ref> The proof is divided into two parts depending on whether a solution for equation (<ref>) exists or not. We will now proof results for both the q-value and ℓ-value procedures together. Let denote either ^ℓ or ^q. Case 1. (<ref>) has a solution. By (i) of Lemma <ref>, one can bound P_θ_0(ŵ∉[w_2, w_1]) ≤ e^-Cκ^2 nw_1m̃(w_1) + e^-Cκ^2 nw_2m̃(w_2)≤ 2e^- 0.4 Cκ^2 s_n, as m̃(w_1) ≥ 0.4 for a sufficiently large m by Lemma <ref> and w_2 ≤ w_1 ≲ s_n/n by Lemma <ref>. The FDR can be thus bounded by (θ_0, (t, ŵ)) = 𝔼_θ_0( _(t, ŵ)/max{1, _(t, ŵ) + _ (t, ŵ)}) ≤𝔼_θ_0( _(t, ŵ)/max{1, _(t, ŵ) + _ (t, ŵ)}1{w_2 ≤ŵ≤ w_1}) + P_θ_0(ŵ∉[w_2, w_1]). Since _(t, w) and _(t, w) (one can check this from their definitions in (<ref>)) are monotone functions of w for either ^ℓ or ^q, applying Lemma <ref> and the monotonicity of the function x → x/(1+x), the first term in (<ref>) is bounded by 𝔼_θ_0( _(t, w_1)/max{1, _(t, w_1) + _ (t, w_2)}) ≤exp(-𝔼_θ_0_(t, w_2)) + 12𝔼_θ_0_(t, w_1)/𝔼_θ_0_(t, w_2). We need to obtain a lower bound for 𝔼_θ_0_(t, w_2) and an upper bound for 𝔼_θ_0_(t, w_1). Lower bound for 𝔼_θ_0_(w_2). By Lemma <ref>, η̃^ℓ(w_2) ≤ξ(w_2) and η̃^q(w_2) ≤ξ(w_2) for a sufficiently large m, we have 𝔼_θ_0_(w_2) ≥∑_j:θ_0,j≠ 1/2 P_θ_0,j(|X_j - m/2| ≥ mξ(w_2)) = ∑_j:θ_0,j≠ 1/2(_θ_0,j(m/2 + mξ(w_2)) + _θ_0,j(m/2 - mξ(w_2))) ≥∑_j:θ_0,j > 1/2_θ_0,j(m/2 + mξ(w_2)) + ∑_j:θ_0,j < 1/2_θ_0,j(m/2 - mξ(w_2)). Let μ_0,j = θ_0,j - 1/2, consider the following two cases: ξ(w_2) < |μ_0,j| and ξ(w_2) ≥ |μ_0,j |. If ξ(w_2) < |μ_0,j|, by Lemma <ref>, (<ref>) ≥1/2(∑_j:θ_0,j > 1/2_θ_0,j(m/2 + mξ(w_2)) + ∑_j:θ_0,j < 1/2_θ_0,j(m/2 - mξ(w_2)) ) +1/2∑_j: μ_0,j≠ 0Φ̅( √(m)(ξ(w_2) - |μ_0,j|)/√(1/2 + |μ_0,j|)). Due to ξ(w_2) < |μ_0,j|, Φ̅( √(m)(ξ(w_2) - |μ_0,j|)/√(1/2 + |μ_0,j|)) ≥Φ̅(2√(m)(ξ(w_2) - |μ_0,j|)). Therefore, by Corollary <ref> and note that T_m(μ, w_2) ≥1-K^-1/μ√(1-4μ^2)ξ(w_2) → 0 for a fixed μ = μ_0,j, for j ∈𝒥_0 in (<ref>), we obtain (<ref>) ≥1/2∑_j ∈𝒥_0(_θ_0,j(m/2 + mξ(w_2)) + Φ̅(2√(m)(ξ(w_2) - |μ_0,j|)) T_m(μ_0,j, w_2)) ≥w_2/2∑_j ∈𝒥_0 m_1(θ_0,j, μ). 
If ξ(w_2) ≥ |μ_0,j |, then with the assumption (log n)^2 ≪ m (which implies m ξ^4 → 0) and Lemma <ref>, we obtain _θ_0,j(m/2 + m ξ(w_2)) ≥1/2√(1-2|μ_0,j|/1-2ξ(w_2))Φ̅( 2√(m)(ξ(w_2) - |μ_0,j|)/√(1-4μ_0,j^2)). Using the lower bound in Lemma <ref>, Φ̅( 2√(m)(ξ(w_2) - |μ_0,j|)/√(1-4μ_0,j^2))/Φ̅( 2√(m)(ξ(w_2) - |μ_0,j|) )≥√(1-4ξ^2(w_2))/2exp(- 8mμ_0,j^2 (ξ(w_2)-|μ_0,j|)^2/1-4μ_0,j^2). Since |μ_0,j| is at most ξ(w_2), by the assumption m ξ^4(w_2) → 0, the exponential term in the last display is e^-o(1) and can be bounded from below by 1/2 if m is sufficiently large enough. Thus, we obtain _θ(m/2 + m ξ(w_2)) ≥1/4√(1-2|μ_0,j|/1-2ξ(w_2))Φ̅(2√(m)(ξ(w_2) - |μ_0,j|)) ≥1/4Φ̅(2√(m)(ξ(w_2) - |μ_0,j|)). We thus have (<ref>)≥1/8∑_j ∈𝒥_0(_θ_0,j(m/2 + mξ(w_2)) + Φ̅(2√(m)(ξ(w_2) - |μ_0,j|))) ≥w_2/8∑_j ∈𝒥_0 m_1(θ_0,j, μ), By combining both cases, we obtain (<ref>)≥w_2∑_j ∈𝒥_0 m_1(θ_0,j, μ)/8. Moreover, by (<ref>) and Lemma <ref>, there exists constants C, D > 0 such that ∑_j ∈𝒥_0 m_1(θ_0,j, μ) = (1+κ)(n - s_0) m̃(w_2) - Cn^1-Dm̃(w_2). The second term in the last display is of a smaller order of the first one, as 0.4 < m̃(w_2) ≤ 1.1 in Lemma <ref> for a sufficiently large n. Therefore, there exists a constant C' < 1/8 such that 𝔼_θ_0TD_(w_2)≥(<ref>)≥ C' (n-s_0) w_2 m̃(w_2). Upper bound for 𝔼_θ_0_(t, w_1). We derive the upper bound for the q-value and ℓ-value procedures separately. Consider the q-value procedure, by Lemma <ref> and Lemma <ref>, one obtains 𝔼_θ_0_^q(w_1) ≤∑_j: θ_0,j = 1/2𝔼_θ_0_j^q(w_1) = ∑_j: θ_0,j = 1/2 P_θ_0 (q(X_j; w_1) ≤ t) ≲w_1 t (n - s_0)/(1-w_1)(1-t)≤ C_1 (n-s_0) w_1 t, for C_1 = 1/((1-w_1)(1-t)). For the ℓ-value procedure, by Lemma <ref> and, again, Lemma <ref>, 𝔼_θ_0_^ℓ(w_1) ≤∑_j: θ_0,j = 1/2𝔼_θ_0_j^ℓ(w_1) = ∑_j: θ_0,j = 1/2 P_θ_0(ℓ(X_j; w_1) ≤ t) ≤ (n - s_0) ϕ( ε√(M)) e^A_m/ε√(M), where ε = 2K-M/M, K = m η^ℓ(r(w_1, t)) - 1, M = m-1, and A_m ≲ e^-C M ε^4. Using Lemma <ref>, η^ℓ(r(w_1, t)) - 1/2 ≤ζ(w_1) + log (t(1-t)^-1)/√(2mlog(1/w_1)) - K(η_∘) log(1/w_1)/√(2m) + C/√(2m log(1/w_1)) ≤√(log(1/w_1)/2m) + o(1), as w_1 ≲ s_n/n = n^v_1 - 1 and log (1/w_1) ≍log n ≪ m. By plugging-in this upper bound, (<ref>)≤(n - s_0) ϕ( 2√(M) (η^ℓ(r(w_1, t)) - 1/2)) e^A_m/2√(M)(η^ℓ(r(w_1, t)) - 1/2)≲(n - s_0) w_1/√(2log (1/w_1))≍(n - s_0) w_1/√(2 log n). Upper bound for the FDR. For the q-value procedure, combining (<ref>) and (<ref>), we obtain (<ref>)≤ e^-C'(n-s_0) w_2m̃(w_2) + C_2 t, and thus by (<ref>), for a sufficiently large n, sup_θ_0 ∈ℓ_0[s_n](θ_0, ^q(t, ŵ)) ≲ e^-C'(n-s_0) w_2m̃(w_2) + C_2 t. Similarly, for the ℓ-value procedure, by combining (<ref>) and (<ref>), we obtain sup_θ_0 ∈ℓ_0[s_n](θ_0, ^ℓ(t, ŵ)) ≲ e^- C'(n - s_0)w_2m̃(w_2) + C_3/√(log n). Case 2: (<ref>) does not have a solution. If (<ref>) does not have a solution, then one must have ∑_j ∈ S_0 m_1(θ_0,j, w) < (1-κ) (n-s_0) m̃(w) due to m̃(w) is continuous and monotone increasing and m_1(u, w) is continuous and monotone decreasing (see Lemma <ref>). By (ii) in Lemma <ref>, there exists a constant C and 1 ≤Δ_n ≪ n to be specified later such that P_θ_0(ŵ≥ w_0) ≤ e^-Cκ^2 Δ_n. Recall that n w_0 m̃(w_0) = Δ_n. For either ^ℓ or ^q, one can bound the FDR by (θ_0, (t, ŵ)) ≤ P_θ_0(j: θ_0,j = 1/2, _j(t, ŵ) = 1) ≤ (n-s_0) P_θ_0 = 1/2(_j(t, w_0) = 1) + P_θ_0(ŵ≥ w_0) ≤ (n-s_0) P_θ_0 = 1/2(_j(t, w_0) = 1) + e^-Cκ^2 Δ_n. We need to bound P_θ_0 = 1/2(_j(t, w_0) = 1). First, for the q-value procedure, using Lemma <ref>, P_θ_0 = 1/2(_j^q(t, w_0) = 1) = P_θ_0 = 1/2(q(w_0) ≤ t) ≤ 2r(w_0, t) = 2 w_0 t/(1-w_0)(1-t). 
One thus obtains (θ_0, ^q(t, ŵ)) ≤2 (n-s_0) w_0 t/(1-w_0) (1-t) + e^-Cκ^2 Δ_n≤2 (n-s_0) w_0 m̃(w_0) t/0.4 (1-w_0) (1-t) + e^-Cκ^2 Δ_n, as 0.4 ≤m̃(w) ≤ 1.1 by Lemma <ref>. Since w_0 is the solution of (<ref>), and t ≤ 4/5, the first term in the upper bound in (<ref>) can be bounded by 2 t Δ_n/0.4(1- n^-1)(1-t)≤ 25 (1+o(1)) t Δ_n, for a sufficiently large n. For the ℓ-value procedure, by Lemma <ref> and the upper bound in (<ref>), let ε̃= √(2log(1/r(w_0,t))/m), P_θ_0 = 1/2(ℓ(x) ≤ t) ≲2ϕ(√(M)ε̃)/√(M)ε̃≲2e^- M^2 ε̃^2/2/√(2M log (1/r(w_0,t)))≲√(2)r(w_0, t)/√(M log (1/r(w_0,t))) Therefore, if t ≤ 4/5, (θ_0, ^ℓ(t, ŵ)) ≲2(n-s_0) w_0 t/(1-w_0)(1-t) √(log n) + e^-Cκ^2 Δ_n≤25(1+o(1))Δ_n t/√(log n) + e^-Cκ^2 Δ_n. Combining Case 1 and Case 2. Combining (<ref>) and the upper bound in (<ref>), we obtain sup_θ_0 ∈ℓ_0[s_n](θ_0, ^q(t, ŵ)) ≤max{e^-C'(n-s_0)w_2m̃(w_2) + C_2 t, 25(1+o(1)) tΔ_n + e^-Cκ^2 Δ_n}. Using that m̃(·) ∈ [0.4, 1.1] for a sufficiently large m by Lemma <ref>, (n-s_0) w_2 m̃(w_2) ≳ (n-s_0) w_0 m̃(w_0) ≥ C”Δ_n, for some C” > 0. Then, sup_θ_0 ∈ℓ_0[s_n](θ_0, ^q(t, ŵ)) ≲max{e^-C”Δ_n+ C_2 t, 23 t Δ_n + e^-Cκ^2 Δ_n}. Choosing Δ_n = max{1/C”, log (1/t)/Cκ^2}, sup_θ_0 ∈ℓ_0[s_n](θ_0, ^q(t, ŵ)) ≲ tlog(1/t). For the ℓ-value procedure, combining (<ref>) and (<ref>) gives sup_θ_0 ∈ℓ_0[s_n](θ_0, ^ℓ(t, ŵ)) ≤max{ e^-C”Δ_n + C_3/√(log n), K t Δ_n/√(log n) + e^-Cκ^2 Δ_n}. To minimize the upper bound, we choose Δ_n = max{loglog n/2Cκ^2, loglog n/2C”} instead, which leads to sup_θ_0 ∈ℓ_0[s_n](θ_0, ^ℓ(t, ŵ)) ≲tloglog n/√(log n). §.§ Proof of Lemma <ref> If (<ref>) has a solution, then by following the proof of the ℓ-value procedures, one obtains (θ_0, ^ℓ(t, ŵ)) ≤exp(-𝔼_θ_0_^ℓ (t, w_2)) + 12𝔼_θ_0_^ℓ(t, w_1)/𝔼_θ_0_^ℓ(t, w_2) + P_θ_0 (ŵ∉[w_2, w_1]). Lower bound for 𝔼_θ_0_^ℓ (w_1). By definition, 𝔼_θ_0_^ℓ (t, w_1) = ∑_j: θ_0,j≠ 1/2𝔼_θ_0_j^ℓ(t, w_1) = ∑_j: θ_0,j≠ 1/2 P_θ_0(ℓ(X_j; w_1) ≤ t) ≥∑_j: θ_0,j > 1/2_θ_0,j (m/2 + m η̃^ℓ(t, w_2)) + ∑_j: θ_0,j < 1/2_θ_0,j (m/2 - m η̃^ℓ(t, w_2)). By Lemma <ref>, |2m(η̃^ℓ(r(w,t)))^2 -2mξ^2 (w)| ≤ |log(t(1-t)^-1))^2| + C Thus, by Lemma <ref>, (<ref>)≥ C_t ( ∑_j: θ_0,j > 1/2_θ_0,j (m/2 + m ξ(w_2)) + ∑_j: θ_0,j < 1/2_θ_0,j (m/2 - m ξ(w_2)) ), for some positive fixed C_t depending on t and θ_0. Thus, for a large enough n, 𝔼_θ_0_^ℓ (w_1) ≥ C' (n-s_0) w_2 m̃(w_2), where C' is some constant depends on t and θ_0. Upper bound for 𝔼_θ_0_^ℓ (w_2). We have 𝔼_θ_0_^ℓ (w_1) = ∑_j: θ_0,j = 1/2𝔼_θ_0_j^ℓ(w_1) = ∑_j: θ_0,j = 1/2 P_θ_0(ℓ(X_j; w_1) ≤ t) = (n - s_0) (m η^ℓ(r(w_1, t))). By Lemma <ref>, let M = m-1 and K = m η^ℓ(r(w_1, t)) - 1, for ε = 2K-M/M, (m η^ℓ(r(w_1, t))) = Φ̅( ε√(M)) e^A_m≤ϕ( ε√(M)) e^A_m/ε√(M). The last inequality is obtained using Lemma <ref>. Therefore, (<ref>) can be bounded by (n - s_0) ϕ( 2√(M) (η^ℓ(r(w_1, t)) - 1/2)) e^A_m/2√(M)(η^ℓ(r(w_1, t)) - 1/2)≲(n - s_0) w_1/√(2m (log (1/w_1) + log√(m))). Since w_1 ≲ s_n/n = n^v_1-1 by Lemma <ref> and using Lemma <ref>., 𝔼_θ_0_^ℓ (w_1) ≲(n-s_0) w_1/√(2(1-v_1)m log n)≲(n-s_0) w_2 m̃(w_2)/√(m log n). Combining (<ref>) and (<ref>), (θ_0, ^ℓ(t, ŵ)) ≲ e^-C'(n-s_0) w_2 m̃(w_2) + 2Ks_n/√(m)ξ(w_2) + 12 (n-s_0) w_2 m̃(w_2)/√(mlog n)/(n-s_0) w_2m̃(w_2) - 2Ks_n/√(mξ^2(w_2)). Using (<ref>), the last display can be bounded by e^-C”Δ_n + 2Ks_n/√(m)ξ(w_2) + 12/√(m log n). If a solution (<ref>) does not exist, then (θ_0, (t, ŵ)) ≤ (n-s_0) P_θ_0 = 1/2(_j^ℓ(t, w_0) = 1) + e^-Cκ^2 Δ_n. By applying the upper bound in Lemma <ref>, one obtains P_θ_0 = 1/2(ℓ(x) ≤ t) ≲ϕ(√(M)ε̃')/√(M)ε̃', where ε̃' ∼√(2(log(1/r(w_0,t)) + log(√(m)))/m). 
We thus obtain, ϕ(√(M)ε̃')/√(M)ε̃'≲e^- M^2 (ε̃')^2/2/√(2M (log (1/r(w_0,t)) + log√(m)))≲r(w_0, t)/√(m)/√(2M (log (1/r(w_0,t)) + log√(m))). Therefore, for any t ≤ 4/5, (θ_0, ^ℓ(t, ŵ)) ≲2(n-s_0) w_0 t/(1-w_0) (1-t) √(m log n) + e^-Cκ^2 Δ_n≤25(1+o(1))Δ_n t/√(mlog n) + e^-Cκ^2 Δ_n. By combining the two cases, we thus obtain (θ_0, ^ℓ(t, ŵ)) ≤ max{e^-C”Δ_n + 2Ks_n/√(m)ξ(w_2) + 12/√(m log n), 25(1+o(1))Δ_n t/√(mlog n) + e^-Cκ^2 Δ_n}. Letting Δ_n = max{1/2C”log (m log n), 1/2Cκ^2log(m log n)}, we thus obtain the upper bound. § PROOF OF RESULTS IN SECTION <REF> In this section, we prove results in Section <ref>. The proof of Theorem <ref> is given in Section <ref> and the proof of FNR results in Theorem <ref> and Lemma <ref> is provided in <ref>. §.§ Proof of Theorem <ref> We first prove the upper bound. By definition, sup_θ_0 ∈Θ_0[s_n] (θ_0, ^q(t, ŵ)) = sup_θ_0 ∈Θ_0[s_n]𝔼_θ_0[ ∑_j=1^n 1{θ_0,j = 1/2}_j^q(t, ŵ)/1 ∨∑_j=1^n _j^q(t, ŵ)], where recall that _j^q(t, ŵ) = 1{q(X_j; ŵ, g) ≤ t} and q(x; w, g) = (1 + w/1-w (g/φ)(x))^-1. Let Ω_n = Ω_0 ∩ P(ŵ∈ [w_1, w_2]), where Ω_0 = {#{j ∈𝒮_0, |X_j - m/2| > bmζ_n}≥ s_n - K_n}, and w_1 and w_2 are the solutions for (<ref>) and (<ref>) respectively, then the last display can be bounded by sup_θ∈Θ_0[s_n]𝔼_θ_0[ ∑_j=1^n 1{θ_0,j = 1/2}^q_j(t, ŵ)/1 ∨∑_j=1^n ^q_j(t, ŵ)1{Ω_n}] + P(Ω_0^c) + P(ŵ∉[w_1, w_2]). By Lemma <ref> and Lemma <ref>, we have P(Ω_0^c) + P(ŵ∉[w_1, w_2]) = o(1). Also, with Ω_n, the denominator can be bounded from below by ∑_j=1^n ^q_j(t, ŵ) ≥∑_j=1^n 1{θ_0,j = 1/2}_j^q(t, ŵ) + s_n - K_n. Therefore, (<ref>) can be bounded by sup_θ_0 ∈Θ_0[s_n](θ_0, ^q(t, ŵ)) ≤sup_θ∈Θ_0[s_n]𝔼_θ_0[ ∑_j=1^n 1{θ_0,j = 1/2}^q_j(t, w_1)/∑_j=1^n 1{θ_0,j = 1/2}^q_j(t, w_1) + s_n -K_n] + o(1) ≤sup_θ_0 ∈Θ_0[s_n]𝔼_θ_0(∑_j=1^n 1{θ_0,j = 1/2}^q_j (t, w_1))/sup_θ_0 ∈Θ_0[s_n]𝔼_θ_0(∑_j=1^n 1{θ_0,j = 1/2}^q_j (t, w_1)) + s_n - K_n + o(1), by concavity and monotonicity of the function x ∈ (0, ∞) → x/(x+1). What remains is to bound the two expectations in (<ref>), which are indentical. Recall η^q defined in (<ref>), by Lemma <ref>, we have sup_θ_0 ∈Θ_0[s_n] 𝔼_θ_0(∑_j=1^n 1{θ_0,j = 1/2}_j^q(t; w_1) ) ≤ C(n - s_n) r(w_1, t) = C(n - s_n) w_1 t (1-t)^-1 (1-w_1)^-1≲ s_n (1- C's_n/n) t(1-t)^-1 (1 + ϵ), as (1-w_1)^-1≤ 1+ϵ for some ϵ = o(1). The last inequality is obtained using Lemma <ref>. Therefore, as long as s_n/n → 0, (<ref>) can be bounded by (1+ϵ)(1-ϵ') s_n t(1-t)^-1/(1-ϵ')(1+ϵ) s_n t(1-t)^-1 + s_n - K_n→t(1-t)^-1/t(1-t)^-1 + 1 - o(1)→ t, as K_n = o(s_n) and both ϵ, ϵ' = o(1). Next, to prove the lower bound, we have inf_θ_0 ∈Θ_0[s_n]𝔼_θ_0(_^q(t, ŵ)/_^q(t, w) + _^q(t, ŵ)1{ŵ∈ [w_1, w_2] }) ≥inf_θ_0 ∈Θ_0[s_n]𝔼_θ_0( 𝔼_θ_0 (_^q(t, ŵ)) (1-δ)/𝔼_θ_0 (_^q(t, ŵ)) (1-δ) + s_n1{ŵ∈ [w_1, w_2] } ×1{|_^q(t, ŵ) - 𝔼_θ_0 (_^q(t, ŵ))| ≤δ𝔼_θ_0 (_^q(t, ŵ))}), for some small δ to be specified later. On the event ŵ∈ [w_1, w_2], we have 𝔼_θ_0(_^q(t, ŵ)) ≥𝔼_θ_0(_^q(t, w_2)) = ∑_j: θ_0,j = 1/2𝔼_θ_0_j^q(t, w_2) = (n - s_n) P_θ_0(|X_j - m/2| ≥ mη^q(r(w,t)) - m/2) = 2 (n - s_n) r(w_2, t) (mη^q(r(w_2, t)) - m/2) By Lemma <ref>, for any w ∈ (0, 1) and fixed t ∈ (0, 1), (mη^q(r(w_2, t)) - m/2) = m/2 - mη^q(r(w_2, t)) /m+1 = m/2 - mζ(w_2) + o(mζ(w_2))/m+1, there exist an ε∈ (0, 1) such that 𝔼_θ_0(_^q(t, ŵ)) ≥ (n - s_n) r(w_2, t)(1-ε) = (n-s_n) w_2 t (1-w_2)^-1 (1-t)^-1 (1-ε) = w_2 t (1-w_2)^-1 (1-t)^-1 (1-ε) ∑_j ∈𝒮_0 m_1(θ_0,j, w_2) (1+κ)^-1 (m̃(w_2))^-1 ≥ (1-ε')^2 s_n (1+κ)^-1 t(1-t)^-1, by Lemma <ref>. 
On the other hand, by Chebychev's inequality, sup_θ_0 ∈Θ[s_n] P_θ_0(|_^q(t, ŵ) - 𝔼_θ_0 (_^q(t, ŵ))| > δ𝔼_θ_0 (_^q(t, ŵ))) ≤Var_θ_0 (_^q(t, ŵ))/δ^2 (𝔼_θ_0(_^q(t, ŵ)))^2≤1/δ^2 𝔼_θ_0(_^q(t, ŵ))→ 0, for any fixed δ∈ (0, 1), as s_n →∞. We combine the relevant lower bounds obtained above and obtain lim_n→∞inf_θ_0 ∈Θ_0[s_n](θ_0, ^q(t, ŵ)) ≥(1-ε')^-1(1-δ)(1+κ)^-1 t(1-t)^-1 s_n/(1-ε')^-1(1-δ)(1+ν)^-1 t(1-t)^-1s_n + s_n +o(1) → t + o(1) → t, as n→∞, by letting κ→ 0 and δ→ 0 (but not too fast as long as δ s_n →∞; e.g., choosing δ = 1/√(s_n)). We thus complete the proof. §.§ Proof of FNR results for the ℓ-value, q-value, and Cℓ-value procedures §.§.§ Proof of Theorem <ref> Since q-values are less conservative than ℓ-values, it is enough to prove the ℓ-value result. By the definition of , sup_θ_0 ∈Θ_0[s_n](θ_0, ^ℓ(t, ŵ)) = sup_θ_0 ∈Θ_0[s_n]𝔼_θ_0( s_n - ∑_j=1^n 1{θ_0,j≠ 1/2}_j^ℓ(t, ŵ)) /s_n ∨ 1). Let η̃^ℓ(r(ŵ, t)) = η^ℓ(r(ŵ, t)) - 1/2, we have P_θ_0_j^ℓ(t, ŵ) ≥ P_θ_0(|X_j - m/2| ≥ mη̃^ℓ(r(ŵ, t))). By Lemma <ref>, we have η̃^ℓ(r(ŵ, t)) ≤ζ(ŵ) + C(t, w_0)/√(2m log(1/ŵ)) + K(η_0) √(log(1/ŵ)/2m), where C(t,w_0) = log(t(1-t)^-1) + C(w_0), C(w_0) is a constant. Consider the event 𝒲 = {ŵ∈ [w_2, w_1]}, then P_θ_0 (𝒲^c) = o(1) by Lemma <ref>. On the event 𝒲, applying Lemma <ref>, we obtain ζ(w_1) ≤ζ(ŵ) ≤ζ(w_2) ≤ζ(C's_n/n) and log(n/(C' s_n)) ≤log(1/w_1) ≤log(1/ŵ) ≤log(1/w_2) < log n. Since m ≫ (log n)^2, for a sufficiently large n, η̃^ℓ(r(ŵ, t)) ≤ 2ζ(C's_n/n). We thus can bound P_θ_0_j^ℓ(t, ŵ) from below by P_θ_0(|X_j - m/2| ≥ 2mζ(C's_n/n)). Applying Lemma <ref> for some K_n to be specify later (choosing b = 2 in Lemma <ref>), we thus obtain (<ref>) ≤sup_θ_0 ∈Θ_0[s_n]{𝔼_θ_0( s_n - ∑_j=1^n 1{θ_0,j≠ 1/2}_j^ℓ(t, ŵ)) /s_n ∨ 11{Ω_n ∩𝒲}) + P_θ_0(𝒲^c) + P_θ_0 (Ω_n^c) } ≤K_n/s_n ∨ 1 + o(1). Now we choose K_n = max (2s_n p_n, s_n/log s_n) with p_n = 2(m/2 + k √(m log(n/s_n)/2)), it is easy to verify that K_n = o(s_n). Therefore, the last display goes to 0 as s_n →∞. §.§.§ Proof of Lemma <ref> Let's consider the event 𝒲 = {ŵ∈ [w_2, w_1]}, we have lim_n inf_θ_0 ∈Θ_0[s_n](θ_0, ^ℓ) ≥inf_θ_0 ∈Θ_0[s_n]𝔼_θ_0( ∑_j =1 ^n 1{θ_0,j≠ 1/2} (1-_j^ℓ (t; ŵ))/s_n ∨ 11{𝒲}) ≥ 1 - sup_θ_0 ∈Θ_0[s_n]E_θ_0( ∑_j=1^n 1{θ_0,j≠ 1/2}_j^ℓ (t, w_2)/s_n) ≥ 1 - sup_θ_0 ∈Θ_0[s_n](max_j P_θ_0 (ℓ(X_j) ≤ t)), as _j^ℓ(t, w) is a decreasing function as w increases for each j. By the definition of ℓ-value, sup_θ_0 ∈Θ_0[s_n]( max_j P_θ_0(ℓ(X_j) ≤ t) ) = sup_θ_0 ∈Θ_0[s_n](max_j P_θ_0((φ/g)(X_j) ≤ r(w_2,t))) ≤sup_θ_0 ∈Θ_0[s_n](max_j P_θ_0(|ũ_j| ≥ mη̃^ℓ(r(w_2, t)) - mζ(s_n/n))), which we used |θ_0,j-1/2| ≥ζ(s_n/n), where ũ_j = X_j - m θ_0,j is a centered variable and η̃^ℓ(·) = (φ/g)^-1(·) - 1/2. Applying the Bernstein's inequality in Lemma <ref>, and we let A = mη̃^ℓ(r(w_2, t)) - mζ_n(n/s_n), M = 1, and V = ∑_j=1^m θ_0,j(1-θ_0,j) ≤ m/4, then sup_θ_0 ∈Θ_0[s_n](max_j P_θ_0(|ũ_j| ≥ mη̃^ℓ(r(w_2, t)) - mζ(s_n/n))) ≤ 2exp(- A^2/m/2 + 2A/3). By Lemma <ref>, 2m|η̃^ℓ(r(w_2, t))) - (ξ(w_2))| ≤ C_t/ ξ(w_2), C_t is a fixed constant depending on t. Since ξ(w_2) ∼√((log(1/w_2) + log(√(m)))/(2m)) by (<ref>) and w_2 ≤ w_1 ≲ s_n/n by Lemma <ref>, 2A/3 ≤ m/2 for a sufficiently large m, therefore, the last display can be further bounded by 2exp(- 1/m( √(m/2log((C' n /s_n) + log(√(m)) )) - √(2m C_t^2/log (√(m) n)) - √(m/2log(n/s_n)))^2 ) ≤ 2 exp( - C' log(√(m))/2 +o(1) ) ≲ 2 m^- C'/4→ 0, as m →∞. The inequality (a - b)^2 > (a^2 +b^2)/2 for any a, b > 0 is used to obtain the first upper bound in the last display. 
The upper bound in the last display implies that (<ref>)≥ 1 - 2 m^- C'/4→ 1 as m →∞. § TIGHT CONCENTRATION BOUNDS FOR THE MMLE Ŵ Analyzing the behavior of the MMLE ŵ is the most challenging part of our proof. The proving strategy can be summarized as follows: We first describe the intuition for bounding ŵ. Since ŵ depends on X_1, …, X_n, it is a random quantity. We will show that ŵ concentrates around the value s_n/n in a high probability. Let w^⋆ be the solution for 𝔼_θ_0 S(w^⋆) = 0, then when the signal is strong enough, one would expect ŵ is close to the solution w^⋆. Recall the definitions of m̃(w) and m_1(θ_0,j, w) in (<ref>) and (<ref>) respectively. Consider the following equation and let w_1 be its solution: ∑_j ∈𝒮_0 m_1(θ_0,j, w) = (1-κ)(n - s_0) m̃ (w), where κ∈ (0, 1) is a fixed constant, w ∈ [w_0, 1), and θ_0 ∈ℓ_0[s_n]. Depending on κ, n, m, s_n and the true value θ_0, a solution for (<ref>) may or may not exist. If a solution does exist, the solution must be unique, as m̃(w) is monotone increasing and m̃(u, w) is monotone decreasing, again by Lemma <ref>. The next lemma shows that if (<ref>) does not have a solution, then P_θ_0(ŵ≤ w_0) → 1; on the other hand, if (<ref>) has a solution, then P_θ_0(ŵ∉[w_2, w_1]) → 0, where w_2 is the solution for ∑_j ∈𝒮_0 m_1(θ_0,j, w) = (1+κ)(n - s_0) m̃ (w), The relation between w_1 and w_2 is provided in Lemma <ref>. For w ∈ (0, 1) and u ∈ [0, m], m̃(w) is a nonnegative, continuous, monotone increasing function and m_1(u, w) is a continuous, monotone decreasing function. Since β(u, w) is a decreasing function with w, by the definition of m̃ in (<ref>) and note that φ(x) is independent of u, so m̃(w), which is a function of -β(u, w), is a monotone increasing function. We have showed that m̃(w) is a nonnegative function in Lemma <ref>. From the definition of m_1(u, w) in (<ref>), it is a decreasing function as β(u,w) is a decreasing function. The continuity result for m̃(w) (resp. m_1(u, w)) follows by the continuity of β(u,w) and domination of β(u, w)φ(u) (resp. β(u, w)φ_θ(u)) by g(u) + φ(u) (resp. g(u) + φ_θ(u)) up to a constant. Let w_1 and w_2 be solutions of (<ref>) and (<ref>) respectively, then w_1/K < w_2 < w_1 ≲ s_n/n. for some constant K > 1. We first show w_1 ≲ s_n. By definition (1-κ)(n - s_0) m̃ (w_1) ≤∑_j ∈𝒮_0 m_1(θ_0,j, w_1) ≤ s_n max_j ∈𝒮_0 m_1(θ_0,j, w_1). From Lemma <ref>, depending on the value of θ_0,j, m_1(θ_0,j, w_1) is bounded by (<ref>), (<ref>) or (<ref>). If |θ_0,j - 1/2| > Λ/√(2m), (<ref>)≤2/w_1_θ(m/2 + mξ(w_1)) + 4/w_1Φ̅(2√(m) (ξ(w_1) - |μ_0,j|)) ≤6/w_1. If 1/2mξ(w_1)≤ |θ_0,j - 1/2| ≤Λ/√(2m), then 1-4μ_0,j^2 ≈ 1, (<ref>)≲ e^-2m(ξ(w_1) - μ)^2 + 2mζ^2(w_1) + 2mν^2≤1/w_1 e^-2m(ξ(w_1) - μ)^2 + log√(m)≤1/w_1, where we used that ξ^2(w) = ζ^2(w) + ν^2. Last, if |θ_0,j - 1/2| < 1/2mξ(w_1), again, 1-4μ_0,j^2 ≈ 1, (<ref>)≲ζ(w_1) + w_1^C_μ_0,j/√(m) < 1. Therefore, max_j m_1(θ_0,j, w_1) ≲ w_1^-1. Thus, (<ref>) implies w_1 ≲s_n/(1-κ)(n-s_0) m̃(w_1)≤Cs_n/n, as m̃(w_1) ∈ [0.4, 1.1] by Lemma <ref> and s_0/n ≤ s_n/n → 0 as n →∞. The fixed constant C depends on the values of κ, v_1, v_2 and μ_0,j. Next, we prove the inequality w_2 < w_1. Lemma <ref> suggests that m_1(·, w) is a continuous monotone decreasing function and m̃(w) is a continuous monotone increasing function, thus, the ratio m_1(·, w)/m̃(w) is monotone decreasing. From (<ref>) and (<ref>), we have ∑_j ∈𝒮_0 m_1(θ_0,j, w_1)/m̃(w_1)/∑_j ∈𝒮_0 m_1(θ_0,j, w_2)/m̃(w_2) = 1-κ/1+κ < 1 for any κ∈ (0, 1), which implies w_2 < w_1. Last, we show w_2 > w_1/K. 
Introducing the set 𝒥_0 := 𝒥(θ_0, w, K) = {1 ≤ j ≤ n: |θ_0,j - 1/2| ≥ζ(w)/K}. Define ℳ^𝒮_0(w) = ∑_j∈𝒮_0 m_1(θ_0, j, w), ℳ^𝒥_0(w, K') = ∑_j∈𝒥_0 m_1(θ_0, j, w). Since m_1(·, w)/m̃(w) is a monotone decreasing function, it is sufficient to show ℳ^𝒮_0(w_1/K)/m̃(w_1/K) > ℳ^𝒮_0(w_2)/m̃(w_2) > 1+κ/1-κ×ℳ^𝒮_0(w_1)/m̃(w_1). By Lemma <ref>, for w= w_1 or w_1/K, sup_θ_0 ∈ℓ_0[s_n]sup_w ∈ [1/n, 1/log n] |ℳ^𝒮_0(w) - ℳ^𝒥_0(w, K)| ≤ C n^1-D, D ∈ (0, 1), C >0, Therefore, by Lemma <ref>, for some K > 1 to be chosen later, we have ℳ^𝒮_0(w_1/K) ≥ℳ^𝒥_0(w_1/K, K') - Cn^1-D ≥ K ℳ^𝒥_0(w_1, K') - Cn^1-D ≥ K ℳ^𝒮_0(w_1) - 2Cn^1-D. Since ℳ^𝒮_0(w_1) is in the order of n, but the second term in the last lower bound is o(n), we thus can bound ℳ^𝒮_0(w_1/K) ≥ Kℳ^𝒮_0(w_1)/2. As 0.4 ≤m̃(w) ≤ 1.1 by Lemma <ref> if m is sufficiently large enough, we thus obtain ℳ^𝒮_0(w_1/K)/m̃(w_1/K)≥Kℳ^𝒮_0(w_1)/4m̃(w_1). Choosing K > 4(1+κ)/(1-κ) leads to the inequality in (<ref>). We thus complete the proof. Let w_1 and w_2 be solutions of (<ref>) and (<ref>) respectively and w_0 be solution of (<ref>), suppose (log n)^2/m → 0, s_n = n^v_1 and m= n^v_2 for v_1 ∈ (0, 1) and v_2 > loglog n/log n, (i) if (<ref>) has a solution, then for a sufficiently large n, there exists some positive constant C such that for θ_0 ∈ℓ_0[s_n] and any fixed κ∈ (0, 1), then P_θ_0 (ŵ∉[w_2, w_1]) ≤ e^-Cκ^2 n w_1 m̃(w_1) + e^-Cκ^2 n w_2 m̃(w_2) . (ii) If a solution for (<ref>) does not exist, let w_0 be the solution of n w_0 m̃(w_0) = Δ_n, Δ_n ∈ [1.1, ρ_n], for some 1.1 < ρ_n ≪ n, then for a sufficiently large n and the same κ, C as in (i), P_θ_0 (ŵ≥ w_0) ≤ e^-Cκ^2 n w_0 m̃(w_0) = e^- Cκ^2 Δ_n. For a sufficiently large m, m̃(w_0) ∈ [0.4, 1.1] by Lemma <ref>, providing that w_0 → 0, which is true since we require Δ_n/n → 0. Indeed, by rewriting (<ref>), one immediately obtains w_0 = Δ_n/n m̃(w_0). Since Δ_n ≥ 1.1, w_0 ≥ 1/n. Also, since Δ_n ≤ρ_n = o(n), w_0 < 1. Therefore, for ŵ∈ [w_0, 1], ŵ still belongs to the range [1/n, 1] given in (<ref>). We first prove (i): since a solution for (<ref>) exists, the event {ŵ≥ w_1 } implies {S(w_1)≥ 0}. Therefore, P_θ_0 (ŵ≥ w_1) = P_θ_0 (S(w_1) ≥ 0) = P_θ_0 (S(w_1) - 𝔼_θ_0S(w_1) ≥ - 𝔼_θ_0S(w_1)) Since 𝔼_θ_0S(w_1) = ∑_j ∈𝒮_0 m_1(θ_0, j, w_1) - (n-s_0) m̃(w_1) ≤ - κ (n-s_0) m̃(w_1) by (<ref>), the last display is bounded by P_θ_0 (S(w_1) - 𝔼_θ_0S(w_1) ≥κ (n-s_0) m̃(w_1)). Let W_j = β(X_j, w) - 𝔼_θ_0β(X_j, w), j ∈{1, …, n}, then W_j is a centered variable, independently with W_j' for j ≠ j'. Also, |W_j| ≤ |β(X_j, w_1)| ≤ w_1^-1 and ∑_j = 1^n Var(W_j) ≤∑_j=1^n m_2(θ_0,j, w_1) = ∑_{j: |μ_0,j| > Λ/√(2m)} m_2(θ_0,j, w_1) + ∑_{j: |μ_0,j| ≤Λ/√(2m)}m_2(θ_0,j, w_1) = (a) + (b). First, using that m_2(θ_0,j, w_1) ≲ w^-1 m_1(θ_0,j, w_1) in Corollary <ref>, (a) ≲1/w∑_{j: |μ_0,j| > Λ/√(2m)} m_1(θ_0,j, w_1) = 1/w (1-κ)(n-s_0)m̃(w_1) - 1/w∑_{j: |μ_0,j| ≤Λ/√(2m)} m_1(θ_0,j, w_1) ≤ w_1^-1 (1-κ)(n-s_0)m̃(w_1) ≤ w_1^-1 (1-κ)n m̃(w_1). Next, we use that #{j: |μ_0,j| ≤Λ/√(2m)}≤ s_0 ≤ s_n and by (<ref>) and (<ref>) in Lemma <ref>, (b) ≲1/w_1{∑_j: 1/2mξ≤ |μ_0,j| ≤Λ/√(2m)(1/√(m|ξ - μ_0,j|) + 1/(m+1)|μ_0,j|) exp(-2mμ_0,j^2 + 4m |μ_0,j| ξ/1-4μ_0,j^2) + ∑_j: |μ_0,j| < 1/2mξ(ζ + w^C_μ_0,j/√(m)) } ≲s_n/w_1 e^-2m (ξ - Λ/√(2m))^2 + 2mξ^2≤s_n/w_1 e^-2m (ξ - Λ/√(2m))^2 + 2m ν^2 + 2mζ^2 ≤s_n/w_1 e^2mζ^2 = s_n/w_1^2. where we used ζ^2 + ν^2 = ξ^2 and 2mζ(w_1)^2 ≍log(1/w_1). Since w_1 ≤ Cs_n/n by Lemma <ref>, the last display is bounded by w_1^-1 n/C. Therefore, ∑_j=1^n Var(W_j) ≤ (a) + (b) ≤ 2w_1^-1n m̃(w_1), as m̃(w_1) ∈ [0.4, 1.1] by Lemma <ref>. 
Applying Lemma <ref>, the Bernstein inequality, which we choose A = κ(n-s_0)m̃(w_1), M ≤ w_1^-1, and V = 2 w_1^-1 nm̃(w_1), noting that s_0/n ≤ s_n/n = n^v_1 - 1→ 0 as n→∞, one obtains P_θ_0(ŵ≥ w_1) ≤ e^- 3/14κ^2 n w_1 m̃(w_1). To bound the probability P_θ_0 (ŵ≤ w_2), one can proceed in a similar way. We thus obtain P_θ_0 (ŵ≤ w_2) = P_θ_0 (S(w_2) ≤ 0) = P_θ_0 (S(w_2) - 𝔼_θ_0S(w_2) ≤ - 𝔼_θ_0S(w_2)) Define 𝔼_θ_0S(w_2) = ∑_j ∈S_0 m_1(θ_0, j, w_2) - (n-s_0) m̃(w_2) ≤ - κ (n-s_0) m̃(w_2) by (<ref>), one needs to bound P_θ_0 (S(w_2) - 𝔼_θ_0S(w_2) ≤κ (n-s_0) m̃(w_2)). Applying the Bernstein inequality again, we obtain P_θ_0(ŵ≤ w_2) ≤ e^- 3/14κ^2 n w_2 m̃(w_2). By combining the two upper bounds, we obtain the result in (i). The proof for (ii) is similar, for w_0 the solution of (<ref>) and using that ∑_j ∈𝒮_0 m_1(θ_0,j, w_0) < (1-κ) (n-s_0) m̃(w_0), we thus have P_θ_0(ŵ≥ w_0) = P_θ_0(S(w_0) ≥ 0) = P_θ_0(S(w_1) - 𝔼_θ_0S(w_0) ≥ - 𝔼_θ_0S(w_0)) ≤ P_θ_0(S(w_1) - 𝔼_θ_0S(w_0) ≥ - κ (n-s_0) m̃(w_0)). By using the Bernstein inequality again, we obtain the upper bound. Let n w_0 m̃(w_0) = L_n, we thus complete the proof. § BOUNDING M̃(W), M_1(Θ, W), AND M_2(Θ,W) In this section, we obtain bounds for the three moment-related quantities m̃(w), m_1(u, w), and m_2(u,w) given in (<ref>)–(<ref>) respectively. Those bounds are essential for bounding the MMLE ŵ and proving a uniform FDR control result for our multiple testing procedures. Note that in the study of the Gaussian sequence model in <cit.> and , similar moment-related quantities also appear. We will briefly comment on the difference between their bounds in the Gaussian setting and ours in the Bernoulli setting. §.§ Upper and lower bounds for m̃(w) For m̃(w) given in (<ref>), let ξ = |ξ_n(w)| as in Lemma <ref>, if log (1+1/w)/m → 0 as m →∞, then m/2 - mξ/m+1 - 1/√(m)(1+w^-1)≤m̃(w) ≤ 1 + 2wξ/1-w, Furthermore, if w → 0 and m is sufficiently large, then 0.4 ≤m̃(w) ≤ 1.1. By definition, m̃(w) = - ∑_u=0^m β(u)(1 + wβ(u))^-1φ(u). Since g(u) = (1+m)^-1, we have ∑_u=0^m β(u) φ(u) = ∑_u=0^m g(u) - 1 = 0, we thus can write m̃(w) = ∑_u=0^m β(u) φ(u) - ∑_u=0^m β(u) φ(u)/1 + wβ(u) = ∑_u=0^m wβ(u)^2φ(u)/1 + wβ(u), which is positive as 1 + wβ(m/2) ≥ 0 for any w∈ (0, 1) by Lemma <ref>. Since β(u) is symmetric at u = m/2, we then have ∑_u=m/2+1^m 2wβ(u)^2φ(u)/1 + wβ(u)≤m̃(w) ≤∑_u=m/2^m 2wβ(u)^2φ(u)/1 + wβ(u). Recall that m is assumed to be an even number throughout the paper. When m is an odd number, the last display should be replaced by m̃(w) = ∑_u = ⌈ m/2 ⌉^m 2wβ(u)^2φ(u)/1 + wβ(u). The difference is minor and shall not change our results. Next, we will first derive the lower bound for m̃(w). Derive the lower bound. The lower bound in (<ref>) can be written as ∑_u=m/2+1^m 2wβ(u)^2φ(u)/1 + wβ(u) = ∑_u = m/2 + 1^m/2 + mξ2wβ^2(u) φ(u)/1 + wβ(u)_(I) + ∑_u = m/2 + mξ^m2wβ(u) (g - φ)(u)/1 + wβ(u)_(II), which we used β(u) φ(u) = (g-φ)(u). Since 1 + wβ(u) > 0 for u ∈ [m/2+1, m/2 + mξ], we get (I) > 0. Using that β(u) ≥ 0 when u > m/2 + mξ by Lemma <ref>, one obtains g(u) ≥φ(u) and w β(u) ≥ 1. We thus have (II) ≥∑_u = m/2 + mξ^m (g-φ)(u) ≥m/2 - mξ/1+m - 𝐁̅(m/2 + mξ). By Lemma <ref> and then (2) in Lemma <ref>, one obtains 𝐁̅(m/2 + mξ) ≤ e^-m T(1/2+ξ, 1/2)≤ e^-2mξ^2. From (<ref>), 2mξ^2 ∼log (1+w^-1) + log(√(m)). We thus obtain (II) ≥m/2 - mξ/m+1 - K_1/√(m)(1+w^-1) for some constant K_1 = 1+o(1). We then obtain the lower bound for m̃(w). If m →∞, then m̃(w) → 1/2 - ξ∼ 1/2, providing that w is small. Derive the upper bound. 
Using β(u)φ(u) = g(u) - φ(u) again, the upper bound in (<ref>) can be written as m̃(w) ≤∑_u = m/2^m/2 + mξ2wβ^2(u) φ(u)/1 + wβ(u)_(III) - ∑_u = m/2 + mξ^m2wβ(u) φ(u)/1 + wβ(u)_(IV) + ∑_u = m/2 + mξ^m2wβ(u) g(u)/1 + wβ(u)_(V). We first bound (III). Since β(·) is a monotone increasing function on [m/2, m] (see Lemma <ref>) and -1< β(m/2) < 0 (see Lemma <ref>), we have 1 + wβ(u) ≥ 1 + wβ(m/2) ≥ 1-w for any u ∈ [m/2, m/2 + mξ]. Thus, (III) ≤2 w β(m/2+mξ)/1 + wβ(m/2)∑_u = m/2^m/2+mξβ(u) φ(u) ≤2 w β(m/2+mξ)/1 - w∑_u = m/2^m/2+mξ (g(u) - φ(u) ) ≤2m ξ/(1- w)(1+m)≤2ξ/1-w, as β(m/2 + mξ) = 1/w. Next, we bound (V). Using that 1 ≤2w β(u)/(1+ wβ(u)) ≤ 2 as wβ(u) ≥ 1 for u ≥ m/2 + mξ, we have m/2 - mξ/m+1 = ∑_u=m/2 + mξ^m g(u) ≤ (V) ≤∑_u=m/2 + mξ^m 2 g(u) = m - 2mξ/m+1≤ 1 - 2ξ. By summing up the bounds for (III) and (V), and note that (IV) is smaller than (V) due to φ(u) ≤ g(u) for u ∈ [m/2 + mξ, m], as ξ > |ν_n| by Lemma <ref>, we thus obtain the upper bound. §.§ Upper and lower bounds for m_1(θ, w) (Upper bound for m_1(θ, w)) For m_1(θ, w) given in (<ref>), θ∈ (0, 1), and ξ = |ξ_n(w)| as in (<ref>), ζ = ζ(w) in (<ref>), and ν = |ν_n(w)| as in (<ref>), suppose mξ^4 → 0 as m →∞ and μ = |θ - 1/2|, then there exist a w_0 ∈ (0,1) and a fixed μ_0 < 1/2 such that for any w ≤ w_0 and Λ/√(2m) < μ≤μ_0, Λ > 0 is some fixed constant, we have m_1(θ, w) ≤2/w_θ(m/2 + mξ) + 2/wΦ̅(2√(m)(ξ -|μ|))T_m(μ, w), where T_m(μ, w) = |ξ - μ|/μ√(1-4ξ^2) . If 1/2mξ≤μ≤Λ/√(2m), m_1(θ, w) ≲(1/√(m|ξ - μ|)+1/(m+1)μ) e^-2mμ^2 + 4mμξ/1-4μ^2. If 0 < μ < 1/2mξ, for C_μ = 4μ^2/1-4μ^2, then m_1(μ, w) ≲ 2e^2/1-4μ^2(ζ + w^C_μ/√(m)). In addition, if m is sufficiently large such that ξlog m → 0, then m_1(θ, w) ≲1/w( Φ̅(2√(m) (ξ - μ)/√(1-4μ^2)) + Φ̅(2√(m) (ξ - μ)) T_m(μ, w)). We only consider the case when θ > 1/2, as the proof for θ < 1/2 is very similar, and the result will be the same. Let μ = |θ - 1/2|. We start with splitting the summation in m_1(θ, w) into two terms as follows: m_1(θ, w) = ∑_|u - m/2| > mξβ(u, w) φ_θ(u)_(I) + ∑_|u - m/2| ≤ mξβ(u, w) φ_θ(u)_(II). Using w β(u) > 1 when |u - m/2| > mξ from Lemma <ref>, we have (I) ≤1/w∑_|u - m/2| > mξφ_θ(u) = 1/w(𝐁̅_θ(m/2 + mξ) + _θ (m/2 - mξ)) ≤2/w𝐁̅_θ(m/2 + mξ). We thus obtain the first term in (<ref>). To bound (II), one can write (II) = ∑_|u - m/2| ≤ mξ(β(u)- w β^2(u)/1+wβ(u)) φ_θ(u) ≤∑_|u - m/2| ≤ mξβ(u) φ_θ(u) ≤∑_mν≤ |u - m/2| ≤ mξβ(u) φ_θ(u), as β(u) < 0 when |u - m/2| < mν by Lemma <ref>. (<ref>) can be further bounded by ∑_m ν≤ |u - m/2| ≤ mξg(u) φ_θ(u)/φ(u) = 1/m+1∑_ mν≤ |ũ| ≤ mξφ_θ(ũ + m/2)/φ(ũ+m/2) ≤2/m+1∑_mν≤ũ≤ mξ e^mT(1/2 + ũ/m, 1/2) - m T(1/2+ũ/m, 1/2+μ), where T(a, p) = alog (a/p) + (1-a)log((1-a)/(1-p)). By Lemma <ref>, we have mT(1/2 + ũ/m, 1/2) - m T(1/2+ũ/m, 1/2+μ) = 2ũ^2/m - 2m(μ - ũ/m)^2/1-4μ^2 + mϖ_m(μ, ũ/m) = 4m/1-4μ^2(μũ/m - μ^2/2) - 8 μ^2 ũ^2/m(1 - 4μ^2) + mϖ_m(μ, ũ/m), where ϖ(μ, ũ/m) = (ũ/m)^3 ϵ_1/3(1/2 + ϵ_1)^2(1/2-ϵ_1)^2 - (μ - ũ/m)^3 (μ + ϵ_2 )/3(1/2 + μ + ϵ_2)^2(1/2 - μ -ϵ_2)^2 for some ϵ_1 ∈ [0, ũ/m] and ϵ_2 ∈ [0, μ - ũ/m] if μ≥ũ/m and ϵ_2 ∈ [μ - ũ/m, 0] if μ < ũ/m. As ũ is at most mξ, the first term in (<ref>) is ≲ (ũ/m)^4 ≤ξ^4. We only need to consider the case when the second term is negative (when ũ/(2m) ≤μ≤ũ/m), as otherwise, this term can be simply bounded by 0. Using ũ≤ mξ again, then μ = O(ξ) and the second term is bounded by Cξ^4 for some constant C > 0. Hence, mϖ(μ, ũ/m) ≤ C_1 mξ^4 for some C_1 > C. 
We thus obtain (II) ≤(<ref>) = 2/m+1∑_mν≤ũ≤ mξ e^4m/1-4μ^2(μũ/m - μ^2/2) - 8μ^2 ũ^2/m(1-4μ^2) + mϖ(μ, ũ/m) ≤2/m+1 e^C_1 mξ^4∑_mν≤ũ≤ mξ e^4m/1-4μ^2(μũ/m - μ^2/2) = 2/m+1 e^- 2m μ^2/1-4μ^2 + C_1 mξ^4×e^4μ m ξ/1-4μ^2 - e^4μ m ν/1-4μ^2/e^4μ/1-4μ^2 - 1 ≤1 - 4μ^2/2(m+1)μ e^- 2m μ^2/1-4μ^2 + 4m μξ/1-4μ^2 + C_1 mξ^4 ≲1/2(m+1)μ e^4mμξ - 2m μ^2/1-4μ^2, which we used the inequality e^x - 1 ≥ x for x > 0 to obtain second inequality in the last display and the assumption m ξ^4 → 0 as m →∞ to obtain the last line. By collecting the relevant bounds obtained above, we arrive at m_1(θ, w) ≲2/w_θ(m/2 + mξ) + 1/2(m+1)μ e^4mμξ - 2mμ^2/1-4μ^2 = (a) + (b). When μ≥Λ/√(2m), using that g(x) = (m+1)^-1 and (g/φ)(m/2 + mξ) ≍ w^-1, we have (b) = g(m/2 + mξ)/μ e^4mμξ - 2mμ^2/1-4μ^2 = φ(m/2 + mξ)/w μϕ(2√(m)(ξ - μ) )/ϕ(2√(m)ξ) e^8mμ^2/1-4μ^2 (ξ^2 - (ξ - μ)^2) ≤2√(m)|ξ - μ|/wμΦ̅(2√(m)(ξ - μ) ) φ(m/2 + mξ)/ϕ(2√(m)ξ) e^8mμ^2/1-4μ^2 (ξ^2 - (ξ - μ)^2). To obtain the last line, we used the inequality ϕ(x) ≤ |x + x^-1| Φ̅(x) ≈ |x| Φ̅(x) when 1/x → 0 (see Lemma <ref>). The exponential term in (<ref>) is at most by e^8m ξ^4/1-4μ^2, which happens when μ = ξ. As mξ^4 = o(1), it is bounded by some constant slightly bigger than 1. Also, by applying Lemma <ref> and then Lemma <ref>, we have φ(m/2 + mξ)/ϕ( 2√(m)ξ) ≤1/√(m(1-4ξ^2)) e^-m T(1/2+ξ, 1/2) + 2mξ^2≤1+o(1)/√(m(1-4ξ^2)). Therefore, (<ref>)≲2|ξ - μ|/w μ√(1-4ξ^2)Φ̅(2√(m)(ξ - μ) ). Thus, one can further bound (<ref>) by m_1(θ, w) ≲ (a) + 2/wΦ̅(2√(m)(ξ - μ)) ( |ξ - μ|/μ√(1-4ξ^2)). We thus obtain (<ref>). When 1/2mξ≤μ < Λ/√(2m), using (g/φ)(m/2 + mξ) ≍ w^-1 again and Φ̅(x) ≤ϕ(x)/x by Lemma <ref>, (a) in (<ref>) can be bounded by 2 g(m/2 + mξ)/φ(m/2 + mξ)ϕ(2√(m)(ξ - μ)/√(1-4μ^2)) ×√(1-4μ^2)/2√(m|ξ - μ|) ≤√(2m)/1+m√(1-4ξ^2/|ξ - μ|) e^-2m(ξ - μ)^2/1-4μ^2 + mT(1/2 + ξ, 1/2) ≤√(2(1-4μ^2)/m|ξ - μ|) e^-2m(ξ - μ)^2/1-4μ^2 + 2mξ^2 + o(1) ≲√(2(1-4μ^2)/m|ξ - μ|) e^4m/1-4μ^2(μξ - μ^2/2). Combining the preceding upper bound with (b) in (<ref>), we arrive at m_1(θ, μ) ≲(√(1-4μ^2/m|ξ - μ|)+1/(m+1)μ) e^4m/1-4μ^2(μξ - μ^2/2). Since 1 - 4μ^2 ≤ 1, we obtain the result in (<ref>). Last, if 0 < μ < 1/2mξ, then (I) ≤2/w_θ(m/2 + mξ) ≤2/w e^- mT(1/2 + ξ, 1/2 + μ)≤2/w e^- 2m (μ - ξ)^2/1-4μ^2 + C_4 mξ^4 ≤2/we^- 2m (μ - ξ)^2/1-4μ^2≤2/we^- 2mξ^2 + 2/1-4μ^2≤2e^2/1-4μ^2/√(m)w^C_μ, where C_μ = 4μ^2/1-4μ^2. We used the fact that 2mξ^2 ≍log (1/w) + log(√(m)) to obtain the last inequality in the last display. From (<ref>), we also have (II) ≤2e^C_1 mξ^4/m+1∑_m ν≤ũ≤ mξ e^4μũ - 2mμ^2/1-4μ^2≤2 e^4mμν/1-4μ^2 + C_1 mξ^4/m+1∑_0≤ũ≤ mζ e^4μũ/1-4μ^2 ≲2mζ/m+1 e^4mμ (ν + ζ)/1-4μ^2≤ 2ζ e^2/1-4μ^2, as μ < 1/(2mξ). We used the fact that ξ - ν≤ζ to obtain the second inequality. (<ref>) is obtained by combining the upper bounds for (I) and (II). To prove (<ref>), let's write 𝐁_θ(m/2 + mξ) = P_θ(u ≥ m/2 + mξ) for u ∼Bin(m, θ) and denote ũ =u - m/2, Z_m = 2ũ - 2mμ/√(1 - 4μ^2), and W_m as the standard Brownian motion, then P_θ(u ≥ m/2 + mξ) = 𝔼_θ( 1{Z_m ≥2 mξ - 2mμ/√(1 - 4μ^2)}1{|Z_m - W_m| ≥log m + x} + 1{Z_m ≥2 mξ - 2mμ/√(1 - 4μ^2)}1{|Z_m - W_m| < log m + x}) = (c) + (d). By applying the KMT approximation theorem in Lemma <ref> (w.l.o.g we choose C = 1 in the lemma), (c) ≤ P_θ(|Z_m - W_m| ≥log m + x) ≲ e^-K x for some positive constant K. Also, we have (d) ≤ P (W_m ≥2mξ - 2mμ/√(1-4μ^2) - (log m + x) ) = P (W_m/√(m)≥2√(m)(ξ - μ)/√(1-4μ^2) - log m + x/√(m)) = Φ̅( 2√(m)(ξ - μ)/√(1-4μ^2) - log m + x/√(m)). 
Using the lower bound in (iii) in Lemma <ref>, let δ = (log m + x)^2/(2m) and z = 2√(m)(ξ - μ)/√(1-4μ^2) - δ, then Φ̅(z) ≤Φ̅(z + δ) exp(ρ(z) δ + δ^2/2), where ρ(z) = ϕ(z)/Φ̅(z) and by Lemma <ref>, ρ(z) ≤ z + 1/z. By plugging-in the expressions for z and δ, the last display can be bounded by Φ̅(2√(m)(ξ - μ)/√(1-4μ^2)) e^2(ξ - μ) (log m + x)/√(1-4μ^2) - (log m + x)^2/2m + (log m + x)^2/m(2√(m)(ξ - μ)/√(1-4μ^2) - (log m + x)^2/m)^-1. Choosing x = log m, then log m + x/√(m) = 2log m/√(m)= o(1) and (2√(m)(ξ - μ)/√(1-4μ^2) - (log m + x)^2/m)^-1→ 0 as m →∞, thus (c) + (d) ≲ e^-Klog m + Φ̅( 2√(m)(ξ - μ)/√(1-4μ^2)) exp( 4(ξ - μ) log m/√(1-4μ^2) -o(1) ). We need the assumption ξlog m → 0 when ξ > μ, as otherwise, the second term in the last display can goes to infity. The result then follows by using the assumptions e^-Klog m→ 0 and ξlog m → 0. (Lower bound for m_1(θ, w)) For m_1(θ, w) given in (<ref>) with θ∈ (0, 1), let ξ = |ξ_n| and ν = |ν_n| for ξ_n and ν_n in (<ref>) and (<ref>) respectively, suppose m ξ^4 → 0 as m →∞, then for any w ≤ w_0 ∈ (0, 1) and μ = μ_0 ≥Λ/√(2m), μ = |θ - 1/2|, for some positive constant Λ and μ_0 <1/2, m_1(θ, w) ≳1/w( _θ(m/2 + mξ) + Φ̅( 2√(m)(ξ - μ)/√(1-4μ^2)) T'_m(μ, w)) where for C_μ = e^4μ/1-4μ^2 - 1 and ξ := ξ(w), T'_m(μ, w) = |ξ - μ|/C_μ√(1-4μ^2). Moreover, for a sufficiently large m, we have m_1(θ, w) ≳1/w( Φ̅( 2√(m)(ξ - μ)/√(1-4μ^2)) (1+ T'_m(μ, w)) ), if ξ > μ, 1/w( Φ̅( √(m)(ξ - μ)/√(1/2 + μ))+ Φ̅( 2√(m)(ξ - μ)/√(1-4μ^2)) T'_m(μ, w) ) if ξ≤μ. It is sufficient to consider θ > 1/2, as the proof of the case θ < 1/2 is similar. We write m_1(θ, w) as a summation of three terms given by m_1(θ, w) = ∑_|u - m/2| ≥ mξβ(u, w) φ_θ(u)_(I) + ∑_mν < |u - m/2| < mξβ(u, w) φ_θ(u)_(II) + ∑_|u - m/2| ≤ mνβ(u, w) φ_θ(u)_(III) Since 1+wβ(u) ≤ 2wβ(u) if |u - m/2| ≥ mξ, we have (I) ≥1/2w∑_|u - m/2| > mξφ_θ(u) = 1/2w(𝐁̅_θ (m/2 + mξ) + _θ(m/2 - mξ)) ≥1/2w𝐁̅_θ (m/2 + mξ). Next, by Lemma <ref>, 0 ≤ wβ(u) ≤ 1 for u ∈ [m/2 + mν, m/2 + m ξ], and by Lemma <ref>, β(m/2) > -1, thus (II) = ∑_mν < |u - m/2| < mξβ(u, m) φ_θ(u) ≥∑_mν < |u - m/2| < mξβ(u)/1 + wβ(m/2)φ_θ(u) ≥∑_mν < |u - m/2| < mξg(u) φ_θ(u)/(1-w) φ(u) - 1/1-w∑_mν < |u - m/2| < mξφ_θ(u) = (a) + (b). Since g(u) = (1+m)^-1, (a) = 1/(1+m)(1-w)∑_mν < |u - m/2| < mξφ_θ(u)/φ(u). Let T(a, p) = alog (a/p) + (1-a) log ((1-a)/(1-p)) and ũ = u - m/2, then the ratio φ_θ(u)/φ(u) = exp(m T(1/2+ũ/m, 1/2) - mT(1/2+ũ/m, 1/2 + μ)). By (2) in Lemma <ref>, mT(1/2 + ũ/m, 1/2) ≥ 2 m (ũ/m)^2. If ũ/m > μ, then by (3) in Lemma <ref>, mT(1/2+ũ/m, 1/2 + μ) ≤2m (μ - ũ/m)^2/1-4μ^2 + 16 (ũ/m - μ)^3 ũ/3(1-4(ũ/m)^2)^2. Since ν < ũ/m < ξ, the second term in the last display is Km ξ^4 for some constant K > 16/3. If 0 < ũ/m < μ, then by (3) in Lemma <ref>, mT(1/2+ũ/m, 1/2 + μ) ≤2m (μ - ũ/m)^2/1-4μ^2. Therefore, by combining the relevant bounds, we obtain (<ref>)≥exp( 2 m (ũ/m)^2 - 2m (μ - ũ/m)^2/1-4μ^2 - Kmξ^4 ). Note that the first two terms can be understood as the ratio between two Gaussian distributions. Their means and variables correspond to the means and variables of two binomial distributions φ_θ(u) and φ(u). 
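The closing remark can be made concrete numerically: the exact log-ratio log(φ_θ(u)/φ(u)) is well approximated by the Gaussian-ratio expression 2ũ²/m − 2m(μ − ũ/m)²/(1 − 4μ²) derived above. A minimal sketch with arbitrary parameter values:

```python
# Illustration only: the ratio phi_theta(u)/phi(u) behaves like a ratio of two Gaussian
# densities whose means and variances match those of Bin(m, theta) and Bin(m, 1/2).
import numpy as np
from scipy.stats import binom

m, mu = 1000, 0.04
theta = 0.5 + mu
for ut in (10, 30, 50):                     # ut stands for u - m/2
    u = m // 2 + ut
    log_exact = binom.logpmf(u, m, theta) - binom.logpmf(u, m, 0.5)
    log_gauss = 2 * ut**2 / m - 2 * m * (mu - ut / m) ** 2 / (1 - 4 * mu**2)
    print(f"u - m/2 = {ut:3d}   exact log-ratio = {log_exact:8.4f}   Gaussian approx = {log_gauss:8.4f}")
```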
Using the lower bound in the last display and noticing that (1-w)^-1≥ 1/2 for a sufficiently large n, we have (<ref>) ≥1/2(m+1)∑_mν < ũ <mξ e^4m/1-4μ^2(μũ/m - μ^2/2) - 8 μ^2ũ^2/m(1-4μ^2) - Kmξ^4 ≥1/2(m+1) e^- 2mμ^2/1-4μ^2 - 8mμ^2 ξ^2/1-4μ^2 - Kmξ^4∑_mν < ũ <mξ e^4μũ/1-4μ^2 = 1/2(m+1) e^- 2mμ^2/1-4μ^2 - 8mμ^2 ξ^2/1-4μ^2 - Kmξ^4×e^4mμξ/1-4μ^2 - e^4mμν/1-4μ^2/e^4μ/1-4μ^2 - 1 = 1/2 C_μ (m+1) e^- 2mμ^2/1-4μ^2 - 8mμ^2 ξ^2/1-4μ^2 - Kmξ^4( e^4mμξ/1-4μ^2 - e^4mμν/1-4μ^2) ≥1/4 C_μ (m+1) e^- 2mμ^2 - 4mμξ/1-4μ^2 - 8mμ^2 ξ^2/1-4μ^2 - Kmξ^4, for C_μ = e^4μ/1-4μ^2 - 1. Using g(u) = (1+m)^-1 and (g/φ)(m/2 + mξ) ≍ 1/w, by the assumption mξ^4 → 0, then for a sufficiently large m, e^-Kmξ^4≥ 1/2, then the last line in the last display can be written as φ(m/2 + mξ)/2wC_μϕ(2√(m)(ξ - μ)/√(1-4μ^2))/ϕ(2√(m)ξ/√(1-4μ^2)) e^-8mμ^2ξ^2/1-4μ^2 ≥√(m)|ξ - μ|/w C_μ√(1-4μ^2)Φ̅( 2√(m)(ξ - μ)/√(1-4μ^2)) φ(m/2 + m ξ)/ϕ( 2√(m)ξ/√(1-4μ^2)) e^-8mμ^2ξ^2/1-4μ^2, which we used the inequality ϕ(x) ≥Φ̅(x) x for any x > 0 by Lemma <ref>. Applying Lemma <ref> and then (2) in Lemma <ref>, we then obtain φ(m/2 + m ξ)/ϕ( 2√(m)ξ/√(1-4μ^2)) e^-8mμ^2ξ^2/1-4μ^2≳e^-mT(1/2 + ξ, 1/2) + 2mξ^2/√(m(1-4ξ^2))≥1/√(m(1-4ξ^2))≥1/√(m). Thus, (a) ≳Φ̅( 2√(m)(ξ - μ)/√(1-4μ^2)) |ξ - μ|/wC_μ√(1-4μ^2). We will bound (b) and the (III) in (<ref>) together, as they are both negative. Using 0 > β(u) > -1 for |u - m/2| < mν_n, then |(b) + (III)| ≤1/1 + wβ(m/2)∑_|u - m/2| ≤ mνφ_θ(u) ≤1/1-w∑_|u - m/2| ≤ mνφ_θ(u) ≤1/1-w. By combining lower bounds for (I), (II), and (III) and note that w/(1-w) → 0 as n →∞, we obtain (<ref>). To prove (<ref>), it is sufficient to obtain a lower bound for 𝐁̅_θ (m/2 + mξ). If μ≥ξ, then by Lemma <ref>, we immediately obtain 𝐁̅_θ (m/2 + mξ) ≥Φ̅( √(m)(ξ - μ)/√(1/2 + μ)). If 0 < μ < ξ, then by Lemma <ref>, since m ξ^2 →∞, as m →∞, let σ = √(m(1-4μ^2))/2 and Y(z) = Φ̅(z)/ϕ(z) with z = 2 √(m)(ξ - μ)/√(1-4μ^2), we obtain 𝐁̅_θ (m/2 + mξ) = σΦ̅(2√(m)(ξ - μ)/√(1 - 4μ^2)) φ_θ(m-1; mξ+m/2 - 1)/ϕ(2√(m)(ξ - μ)/√(1 - 4μ^2)) e^|ξ - μ|/m. By Lemma <ref> and then (2) in Lemma <ref>, φ_θ(m-1; mξ+m/2 - 1)/ϕ(2√(m)(ξ - μ)/√(1 - 4μ^2)) = 1/2 + ξ/1/2 + μ×φ_θ(mξ+m/2)/ϕ(2√(m)(ξ - μ)/√(1 - 4μ^2)) > 2/√(m(1-4ξ^2)) e^-mT(1/2 + ξ, 1/2+μ) + 2m(ξ - μ)^2/1-4μ^2 ≥2/√(m(1-4ξ^2)) e^- K mξ^4, for some K > 16/3. Therefore, 𝐁̅_θ (m/2 + mξ) ≥2 σ/√(m(1-4ξ^2)) e^- K mξ^4 + |ξ - μ|/mΦ̅(2√(m)(ξ - μ)/√(1 - 4μ^2)) ≳Φ̅(2√(m)(ξ - μ)/√(1 - 4μ^2)) by plugging-in the expression of σ and using |ξ - μ|/m → 0 and the assumption mξ^4 → 0. Results in Lemmas <ref> and <ref> lead to the following corollary: For m_1(θ, w) given in (<ref>), θ∈ (0, 1), and ξ = |ξ_n| and ν = |ν_n| for ξ_n and ν_n in (<ref>) and (<ref>) respectively, suppose m ξ^4 → 0, w → 0, as m →∞, then for any 1/2 > μ = μ_0 ≥Λ/√(2m), μ = |θ - 1/2| and Λ > 0 and some positive constant C, m_1(θ, w) ≳1/w( _θ(m/2 + mξ) + Φ̅(2√(m)(ξ - |μ|)/√(1-4μ^2)) T_m'(μ, w) ), m_1(θ, w) ≲1/w( _θ(m/2 + mξ) + Φ̅(2√(m)(ξ - |μ|) T_m(μ, w) ). For m_1(θ, w) given in (<ref>), θ∈ (0, 1), and ξ = |ξ_n| given in (<ref>), suppose m ξ^4 → 0, w → 0, as m →∞, then for any 1/2 > μ≥ (1 + ρ) ξ(w) with any w ≤ w_0 ∈ (0, 1) and ρ > 0, there exists a ε∈ (0, 1) such that m_1(θ, w) ≥(1-ε)/w. Let a = 1 + ρ/2, for w is small enough, we can write wm_1(μ, w) ={∑_ũ = - amξ^amξ + ∑_|ũ| ≥ amξ}wβ(ũ + m/2)/1 + w β(ũ + m/2)φ_θ (ũ + m/2) ≥∑_|ũ| ≥ amξwβ(ũ + m/2)/1 + w β(ũ + m/2)φ_θ (ũ + m/2) - ∑_ũ = - amξ^amξφ_θ (ũ + m/2) ≥wβ(a m ξ + m/2)/1 + w β(a m ξ + m/2)φ_θ (amξ + m/2) - _θ(amξ). Since μ≥ (1+ρ)ξ, we have φ_θ (amξ + m/2) → 1 when w → 0. 
If w β(amξ + m/2 ) →∞, then the first term in the last display → 1. Let us denote the second term in the last display as ε, we then complete the proof. What remains is to show w β(amξ + m/2 ) →∞. Since 1/w = β(mξ + m/2 ), β(u) ≍ (g/ϕ)(u) and g(u) = (1+m)^-1, using Lemma <ref> and then (2) in Lemma <ref>, we obtain β(amξ + m/2 )/β(mξ + m/2 )≍φ(mξ + m/2)/φ(amξ + m/2)≳ e^m T(1/2 + aξ, 1/2) - mT(1/2 + ξ, 1/2)≥ e^2(a-1)mξ^2 + o(1). Since a > 1, the lower bound in the last display goes to ∞ as m→∞. §.§ Comments on the bounds for m̃(w) and m_1(θ, w) We briefly comment on the differences in the bounds for m̃_0(w) and m_1(θ, w) between our model and the Gaussian sequence model studied in , as these bounds play a crucial role in bounding ŵ. Since our prior is conjugate and g(x) is a constant, bounding m̃_0(w) is relatively straightforward compared to the Gaussian sequence model, which uses a heavy tail distribution, so its posterior is intractable. However, bounding m_1(θ, w) becomes more challenging in our model as it involves dealing with non-centered binomial distributions with a parameter different from 1/2. In the Gaussian sequence model, the upper and lower bounds for m_1(θ, w) are of the same order. In our model, the bounds, as given in Lemma <ref>, have a different variance in the normal cdf part of each bound. The difference in variances arises when bounding the ratio between two binomial distributions with θ = 1/2 and θ≠ 1/2. The change of the parameter value in a binomial distribution affects both its mean and variance, but it only affects the mean if the distribution is Gaussian. In the proof, we generally do not recommend approximating a non-centered binomial distribution with a Gaussian, especially when its parameter is further away from 1/2. As indicated by the upper bound in (<ref>) in Lemma <ref>, an additional price needs to pay to control the approximation error between the two distributions. Since ξ∼√(log(1/w) + log√(m)/2m), the condition ξlog m → 0 implies log(1/w)(log m)^2/m→ 0 and (log m)^3/m → 0, which are stronger than m ≫ (log n)^2 used in proving the main results. §.§ Upper bound for m_2(θ, w) Consider m_2(θ, w) as in (<ref>) and let ξ = |ξ_n| and ν = |ν_n| for ξ_n and ν_n given in (<ref>) and (<ref>) respectively, suppose mξ^4 = o(1), w ≍ s_n/n and s_n = n^v_1 and m = n^v_2 for v_1∈ (0,1), then for any any (0, 1) ∋θ≠ 1/2, m_2(θ, w) ≤2/w^2( _θ(m/2 + mξ) + 4√(2)Φ̅(2√(m) (ξ - |μ|)) ). We split m_2(θ, w) into three parts as follows: m_2(θ, w) = ∑_|u - m/2| ≥ mξβ(u, w)^2 φ_θ(u) + ∑_mν < |u - m/2| < mξβ(u, w)^2 φ_θ(u) + ∑_|u - m/2| ≤ mνβ(u, w)^2 φ_θ(u) = (a) + (b) + (c). As β(u) ≥ 1/w on {u: |u - m/2| ≥ mξ}, one obtains (a) ≤1/w^2∑_|u - m/2| ≥ mξφ_θ(u) ≤2/w^2_θ(m/2 + mξ). Since β(u) ≤ 0 and β(u)^2 ≤ 1, we immediately obtain (c) ≤∑_|u-m/2| ≤ mνβ(u)^2 φ_θ(u)/(1-wβ(m/2))^2≤1/(1-w)^2∑_|u-m/2| ≤ mνφ_θ(u) < 1/(1-w)^2. Last, using that 0 < wβ(u) < 1 for {u:|u - m/2| ∈ (m ν, mξ)}, (b) ≤2/w(1-w)^2∑_mν < |u-m/2| < mξβ(u) φ_θ(u) < 8/w∑_mν < |u - m/2| < mξg(u)φ_θ(u)/φ(u), which we used 1 - w > 1/2 for a sufficiently large n. Using the same argument as in the proof of Lemma <ref>, the summation part in the last display can be bounded by (<ref>), and thus, we have (b) ≤8/w∑_ũ = mν^mξ g(u) φ_θ(u)/φ(u)≤8|ξ - μ|/w^2 μ√(m(1-4ξ^2))Φ̅(2√(m)(ξ - μ)). If μ≥ξ/K for some fixed K > 0, then (b) ≤8√(2)K/w^2 √(m)Φ̅(2√(m)(ξ - μ)), which we used 1 - 4ξ^2 > 1/2, as ξ→ 0 as m →∞. Next, consider 1/2mξ≤μ≤ξ/K, then the ratio |ξ - μ|/μ is at most 2mξ^2. 
Then, (<ref>) is bounded by 16mξ^2/w^2√(m(1-4ξ^2))Φ̅(2√(m)(ξ - μ)) ≤8√(2)/w^2Φ̅(2√(m)(ξ - μ)), as 2mξ^2 ∼log (n/s_n) + log(√(m)) ∼log n ≪√(m). Last, consider 0 < μ < 1/2mξ, (<ref>) in the proof of Lemma <ref> shows that ∑_mν≤ |u - m/2| ≤ mξβ(u) φ_θ(u) ≲ 2ζ e^2/1-4μ^2→ 0, as ζ→ 0. By combining the three cases considered above and summing up the bounds for (a), (b) and (c), using that w → 0 as n →∞, we obtain the upper bound for m_2(θ, w). Consider m_1(θ, w) and m_2(θ, w) as in (<ref>) and (<ref>) respectively, let ξ = |ξ_n| and ν = |ν_n| for ξ_n and ν_n given in (<ref>) and (<ref>) respectively, suppose mξ^4 = o(1), w ≍ s_n/n and s_n = n^v_1 and m = n^v_2 for v_1∈ (0,1), then for any (0, 1) ∋θ≠ 1/2, m_2(θ, w) ≲m_1(θ, w)/w. §.§ Controlling m_1(θ ,w) on the set containing relatively small signals Consider the following set: 𝒥_0 := 𝒥(θ_0, w, K) = {1 ≤ j ≤ n: |θ_0,j - 1/2| ≥ξ(w)/K}, which is a subset of 𝒮_0 = {1≤ j≤ n: θ_0,j≠ 1/2}. Define the following two quantities: ℳ^𝒮_0(w) = ∑_j∈𝒮_0 m_1(θ_0, j, w), ℳ^𝒥_0(w, K) = ∑_j∈𝒥_0 m_1(θ_0, j, w), and we will bound the difference between the two quantities in the next lemma. This bound is essential to obtain the uniform FDR control results in Section <ref> and a tight concentration bound for the MMLE ŵ. Consider the set 𝒥_0 given in (<ref>), suppose s_n ≤ n^v_1 and m ≥ n^v_2 for some fixed v_1 ∈ (0, 1) and v_2 ≥loglog n/log n, then there exists a constant D > 0 depending on v_1, v_2 and some fixed constants Λ and a constant K > (1 - √(v_1/1 + v_1 + v_2/2))^-1 such that for a sufficiently large n, we have sup_θ_0 ∈ℓ_0[s_n]sup_w ∈ [1/n, 1/log n] |ℳ^𝒮_0(w) - ℳ^𝒥_0(w, K)| ≲ n^1-D. To start, allow us to slightly abuse the notation and denote 𝒥_0^c such that 𝒥_0^c ∪𝒥_0 = 𝒮_0, 𝒥_0^c = {1 ≤ j ≤ n: 0 < |θ_0,j - 1/2| < ξ(w)/K}. Let μ_0,j = θ_0,j - 1/2, one can write ∑_j ∈𝒥_0^c m_1(θ_0, j, w) = {∑_0 < |μ_0,j| < 1/2mξ + ∑_1/2mξ≤ |μ_0,j| ≤Λ/√(2m) + ∑_Λ/√(2m) < |μ_0,j| ≤ξ(w)/K} m_1(θ_0,j, w) = (I) + (II) + (III). First, we bound (I). By Lemma <ref>, using the fact that |𝒥_0^c| ≤ |𝒮_0| ≤ s_n and C_μ_0,j→ 0 for any μ_0,j < 1/2mξ, as C_μ_0,j = 4μ_0,j^2/1-4μ_0,j^2→ 0, let C̃ = max_j C_μ_0,j and C_1 = max_jexp(2/1-4μ_0,j^2), we thus obtain (I) ≲ 2C_1 s_n (ζ(w) + w^C̃/√(m)) ≤ 2 C_1 (n^v_1 - v_2/2√(log n) + n^ v_1 - v_2/2 - C̃ (loglog n/log n)) ≤ 4C_1 n^v_1 - v_2/2 + loglog n/(2log n), as s_n = n^v_1, ζ(w) = √(-log w/(2m)), 1/n ≤ w ≤ 1/log n, and m = n^v_2. Next, we bound (II). By Lemma <ref> again and using that (2mξ)^-1≤μ_0,j≤Λ/√(2m) and √(2m)ξ(w) ∼√(log (1/w) + log√(m)), we obtain (II) ≲ s_n max_j( 1/√(m |ξ(w) - μ_0,j|) + 1/(m+1)μ_0,j) e^-2m(|μ_0,j| - ξ)^2 + 2mξ^2 ≤ 4s_n ξ(w) e^2Λ√(2m)ξ(w)≤√(2 log n)C(v_1, v_2) n^v_1 - v_2/2 e^Λ C(v_1, v_2) √(log n) = √(2) C(v_1, v_2) n^v_1 - v_2/2 + Λ C(v_1, v_2)/√(log n) - loglog n/(2log n) ≤√(2)C(v_1,v_2) n^1-(1-v_1 + v_2/2 -Λ C(v_1,v_2)/√(log n)), where C(v_1, v_2) = 2√(1 - v_1 + v_2/2). As n →∞, Λ C(v_1,v_2)/√(log n)→ 0. Last, for Λ/√(2m) < μ_0,j≤ξ(w)/K, T_m(μ_0,j, m) ≤ (1 - K^-1) √(2m)ξ(w)/Λ. Using that s_n /w ≤ n^1+v_1, we obtain (III) ≤ 2n max_j (_θ_0 (m/2 + mξ(w)) + Φ̅(2√(m)(ξ(w) - |μ_0,j|) ) T_m(μ_0,j, m) ) ≤ 2n max_j ( e^-mT(1/2 + ξ, 1/2 + |μ_0,j|) + 2 Λ^-1 (1-K^-1) √(2m)ξ(w) e^- 2m(ξ - |μ_0,j|)^2 ) ≤ 4 C(Λ, K, v_1, v_2) n^1+v_1√(log n) e^-2m(ξ - |μ_0,j|)^2, as mT(1/2 + ξ, 1/2 + |μ_0,j|) ≤ 2m(ξ - |μ_0,j|)^2 + 6m ξ^4 → 2m(ξ - |μ_0,j|)^2 by (3) in Lemma <ref>, as m ξ^4 → 0 by assumption. 
Let C_2 = C(Λ, K, v_1, v_2), then the upper bound in the last display can be bounded by 4 C_2 n^1+v_1√(log n) e^- 2m(1-1/K)^2 ξ^2≤ 4C_2 n^(1 + v_1)(1- (1-1/K)^2) + loglog n/(2log n) - v_2(1-1/K)^2/2. Combining the above upper bounds for (I), (II), and (III), for a sufficiently large n, (<ref>)≲ n^1-(1-v_1 + v_2/2) + n^1 + v_1 - (1+v_1+v_2/2)(1-1/K)^2. Taking D = min{1 - v_1 +v_2/2, (1+v_1+v_2/2)(1-1/K)^2 - v_1}, if K > (1 - √(v_1/1 + v_1 + v_2/2))^-1, then D > 0, providing that v_1 is bounded away from 1 (this is true as we assume w ≤ 1/log n). Thus, (<ref>)≲ n^1-D. Consider the set 𝒥_0 given in (<ref>), suppose s_n ≤ n^v_1 and m ≥ n^v_2 for some fixed v_1 ∈ (0, 1) and v_2 ≥loglog n/log n, then for a sufficiently large K > A > 1 and any w ∈ [n^-1, (log n)^-1], if n is sufficiently large, then ℳ^𝒥_0(w/A, K) ≥ Kℳ^𝒥_0(w, K) Recall the definition of ℳ^𝒥_0(w, K) = ∑_j ∈𝒥_0 m_1(θ_0,j, w). By the lower bound of m_1(·, w) in Lemma <ref>, we have ℳ^𝒥_0(w/A, K) ≥A/w∑_j ∈𝒥_0( _θ_0,j (m/2 + mξ(w/A)) + Φ̅( 2√(m)(ξ(w/A) - |μ_0,j|)/√(1-4μ^2_0,j)) T_m'(μ_0,j, w/A) ) = (a) + (b). We need to obtain a lower bound for (a) and (b) respectively. For (a), using that ξ(w) < ξ(w/A) as long as A > 1 and |θ_0,j - 1/2| = |μ_0,j| ≥ξ(w)/K for each j ∈𝒥_0, we have _θ_0,j (m/2 + mξ(w/A)) = _θ_0,j (m/2 + mξ(w)) - ∑_|ũ| = mξ(w)^mξ(w/A)φ_θ_0,j(m/2 + |ũ|). By plugging-in the expression of the density function of a binomial distribution, the second term in the last display can be written as ∑_|ũ| = mξ(w)^mξ(w/A)φ_θ_0,j(m/2 + |ũ|) = ∑_|ũ| = mξ(w)^mξ(w/A)m m/2 + |ũ|θ_0,j^m/2 + |ũ| (1-θ_0,j)^m/2 - |ũ|. By Lemma <ref>, the last display equals to ∑_|ũ| = mξ(w)^mξ(w/A)√(2)/√(π m(1-4(ũ/m)^2)) e^- mT(1/2 + |ũ|/m, 1/2) + mT(1/2 + |ũ|/m, 1/2 + |μ_0,j|) + o(1). By Lemma <ref> and using |μ_0,j| ≥ξ(w)/K, the last display is bounded by ∑_|ũ| = mξ(w)^mξ(w/A)√(2) e^- 2 m (|ũ|/m)^2 + Cm (ũ/m)^4/√(π m(1-4(ũ/m)^2))≤√(2)e^-2m ξ^2(w) - C mξ^4(w)/√(π m(1-4 ξ^2(w)))≤C' e^-2mξ^2(w)/√(m) , where C > 16/3 is a fixed constant and C' = √(2) e^-o(1)/√(π m (1-4ξ^2(w/A))) as m ξ^4(w) = o(1) by assumption. Therefore, (a) ≥1/w∑_j ∈𝒥_0_θ_0,j (m/2 + mξ(w/A)) - e^-2mξ^2(w)/√(m) w. Since 2mξ^2(w) ∼log (1/w) + log (m)/2, the second term in the last display is O(1/m) = o(1). Next, we derive a lower bound for (b) in (<ref>). By Lemma <ref>, as A > 1, we have T_m(μ_0,j, w/A) ≥1/2T_m(μ_0,j, w), and thus T_m'(μ_0,j, w/A) = C_μ_0,j/μ_0,j√(1-4μ_0,j^2/1-4ξ^2) T_m(μ_0,j, w/A) ≥C_μ_0,j/2μ_0,j√(1-4μ_0,j^2/1-4ξ^2) T_m(μ_0,j, w), where C_μ≤exp(4μ/1-4μ^2) - 1. For a sufficiently large m, 1-4ξ^2 ≤ 3/4, let K_μ_0,j = C_μ_0,j/μ_0,j√(1-4μ_0,j^2/3), then the last display implies T_m'(μ_0,j, w/A) ≥ K_μ_0,j T_m(μ_0,j, w). In addition, by Lemma <ref>, H_μ_0,j(w/A) ≥ A^1/(4K_0) H_μ_0,j(w), where H_μ = 1/wΦ̅(2√(m)(ξ(w) - |μ|/√(1-4μ^2)) and a fixed K_0 ≥√(2)/4. Therefore, (b) ≥A^1/(4K_0)/w∑_j ∈𝒥_0 K_μ_0,jΦ̅( 2√(m)(ξ(w) - |μ_0,j|)/√(1-4μ^2_0,j)) T_m(μ_0,j, w) ≥A^1/(4K_0)/2wK_μ∑_j ∈𝒥_0Φ̅( 2√(m)(ξ(w) - |μ_0,j|)/√(1-4μ^2_0,j)), where K_μ = min_j K_μ_0,j. By combining the lower bounds of (a) and (b), the result follows by letting K = A ∨K_μA^1+1/(4K_0)/2. Consider T_m(μ, w) in (<ref>), for any w ∈ (0, 1) and μ_0 > μ≥ξ(w)/K_0 μ_0 < 1/2, there exists a w_0 = w_0(K_0, z) such that for all w ≤ w_0, z > 1, and μ≥ξ(w)/K_0, we have T_m(μ, w/z) ≥T_m(μ, w)/2. By the definition of T_m(μ, w), we have |ξ(w/z) - μ|/μ√(m(1-4ξ^2(w/z))) ≥|ξ(w) - μ|/μ√(m(1-4ξ^2(w/z))) - |ξ(w) - ξ(w/z)|/μ√(m(1-4ξ^2(w/z))) ≥|ξ(w) - μ|/μ√(m(1-4ξ^2(w))) - |ξ(w) - ξ(w/z)|/μ√(m(1-4ξ^2(w/z))), as ξ(w/z) ≥ξ(w) for any z > 1. 
Since ξ(u) ∼√(1/2m (log u^-1 + log√(m))), ξ(w/z) ∼√(log z/2m + ξ^2(w)). Using that μ≥ξ(w)/K_0, the second term in the last line is bounded by K_0 (√(log z/2m + ξ^2(w)) - ξ(w))/ξ(w) √(m(1-4ξ^2(w)))≤K_0 (√(log z/log (1/w)))/√(m(1-4ξ^2(w)))→ 0, as m →∞. Then for a sufficiently large m, T_m(μ, w/z) ≥|ξ(w) - μ|/2μ√(m(1-4ξ^2(w))) = T_m(μ, w)/2. Consider the function H_μ(w) = 1/wΦ̅(2√(m)(ξ(w) - μ)/√(1-4μ^2)), for any w ∈ (0, 1) and μ_0 > μ≥ξ(w)/K_0 μ_0 < 1/2, there exists a w_0 = w_0(K_0, z) such that for any w ≤ w_0, z > 1, and μ≥ξ(w)/K_0, then there exists K_0 ≥√(2)/4 such that H_μ(w/z) ≥ z^1/(4K_0) H_μ(w). The proof is inspired by the proof of Lemma 19 of , but needs to make a substantial modification for dealing with ξ(w). Let Υ(u) = log H_μ(e^-μ), then the goal is to show the following inequality: Υ(log (z/w)) - Υ(log (1/w)) ≥1/2K_0(log( z/w) - log (1/w)). By the mean-value theorem, it is then sufficient to show that Υ'(u) ≥ 1/(2K_0) for any u ∈ [log 1/w, log z/w]. Note that ξ'(w) = - 1/mw^2 β'(m/2 + mξ(w)). Thus, we have Υ'(u) = 1 - 2 e^u /β'(m/2 + mξ(e^-u)) √(m(1-4μ^2))ϕ/Φ̅(2√(m) (ξ(e^-u) - μ)/√(1-4μ^2)). Also, we have β'(x) = (β(x) + 1) ( Ψ(x+1) - Ψ(m-x+1) ) := (β(x) + 1)Q(x), where Ψ(x+1) = d/dxlogΓ(x+1), Γ(·) is the gamma function, and using that β(m/2 + mξ(w)) = 1/w by Lemma <ref>, we have β'(m/2 + mξ(e^-u)) = (β(m/2 + mξ(e^-u)) + 1)Q(m/2 + mξ(e^-u)) = Q(m/2 + mξ(e^-u)) (e^-u + 1). By plugging-in the above expression into (<ref>) and let C_m(μ) = 2/√(m(1-4μ^2)), one obtains Υ'(u) = 1 - C_m(μ) e^u/(1 + e^u)Q(m/2 + mξ(e^-u))ϕ/Φ̅(2√(m) (ξ(e^-u) - μ)/√(1-4μ^2)). One needs to further bound the function Q(·). Using the mean-value theorem again, then, there exist ξ^⋆∈ [-ξ, ξ] such that Q(m/2 + mξ(x)) = Ψ(m/2 - mξ(x) + 1) - Ψ(m/2 + mξ(x) +1) = 2mξ(x) Ψ'(m/2 + mξ^⋆(x)+ 1). Using Stirling's approximation, Γ(x + 1) ∼√(2π) e^(x + 1/2)log x - x for a sufficiently large x. We thus have Ψ(x+1) ∼log x + 1/2x, and Ψ'(x+1) ∼1/x - 1/2x^2. Therefore, there exists a sufficiently large u such that Q(m/2 + mξ(e^-u)) ∼ 4ξ(e^-u). By plugging this bound into (<ref>), one then arrives at Υ'(u) = 1 - C_m(μ) e^u/4ξ(e^-u)(1 + e^u)ϕ/Φ̅(2√(m) (ξ(e^-u) - μ)/√(1-4μ^2)). Since the map u → e^u(1+e^u)^-1 has limit 1 as u, m →∞, for large enough u, m, e^u(1+e^u)^-1≤ 1 + ϵ for some ϵ > 0 to be specify later. Using the lower bound in Lemma <ref>, if μ < ξ(e^-u) - 1, then C_m(μ)/4ξ(e^-u)ϕ/Φ̅(2√(m) (ξ(e^-u) - μ)/√(1-4μ^2)) ≤C_m(μ)/4ξ(e^-u) 1 + 4 m (ξ(e^-u) - μ)^2/1-4μ^2/2√(m) (ξ(e^-u) - μ)/√(1-4μ^2) = 1/4mξ(e^-u)(ξ(e^-u) - μ) + ξ(e^-u) - μ/ξ(e^-u)(1-4μ^2). The first term in the last display → 0 as m→∞. Using the assumption ξ(w) -1 > μ≥ξ(w)/K_0 for a sufficiently large K_0, the second term in the last display is bounded by ξ(e^-u) - μ/(1-4ξ^2(e^-u))ξ(e^-u)≤1 + ϵ/1-4ξ^2(w)(1 - 1/K_0). When μ≥ξ(e^-u) - 1, C_m(μ)/4ξ(e^-u)ϕ/Φ̅(2√(m) (ξ(e^-u) - μ)/√(1-4μ^2)) ≤ϕ(0)/2√(m)ξ(e^-u) √(1-4μ^2)(Φ̅(2√(m)/√(1-4μ^2)))^-1. Using Lemma <ref> again, Φ̅(2√(m)/√(1-4μ^2)) ≥2√(m)/√(1-4μ^2)(1 + 4m/1-4μ^2)^-1ϕ( 2√(m)/√(1-4μ^2)), and then, (<ref>)≥ϕ(0)/2√(2π) mξ(w)exp(-2m/1-4μ^2) → 0, as m →∞, providing that μ≤μ_0 is bounded away from 1/2. Now combining the upper bound for each case (either μ≥ξ(e^-u) - 1 or μ≤ξ(e^-u) - 1), we obtain 1 - Υ'(u) ≤ (1+ϵ)(1 + 4ξ^2(w)/1-4ξ^2(w) - 1/K_0). For a sufficiently large m, since ξ(w) → 0, if choosing ϵ^-1 = 4K_0- 2 for an K_0 ≥√(2)/4, then 1 - Υ'(u) ≤ (1+ϵ)(1 - 1/2K_0) = 1- 1/4K_0, which implies Υ'(u) ≥1/4K_0. The proof is thus completed. 
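As a spot check of the lemma just proved, the sketch below evaluates H_μ(w) = w^{-1} Φ̄(2√m(ξ(w) − μ)/√(1 − 4μ²)) at one arbitrary configuration, with ξ(w) obtained by discretely solving β(m/2 + mξ) = 1/w for β = (g/φ) − 1, g uniform and φ = Bin(m, 1/2). The choices K_0 = 2 and μ = ξ(w)/K_0 are our own illustrative ones, not values dictated by the lemma.

```python
# Illustration only: check H_mu(w/z) >= z^(1/(4*K0)) * H_mu(w) at one parameter choice.
import numpy as np
from scipy.stats import binom, norm

def xi_of_w(m, w):
    u = np.arange(m // 2, m // 2 + int(0.2 * m) + 2)       # search window (ample here)
    beta = (1.0 / (m + 1)) / binom.pmf(u, m, 0.5) - 1.0
    return (u[beta >= 1.0 / w].min() - m / 2) / m           # discrete solution of beta = 1/w

def H(m, w, mu):
    xi = xi_of_w(m, w)
    return norm.sf(2 * np.sqrt(m) * (xi - mu) / np.sqrt(1 - 4 * mu**2)) / w

m, w, z, K0 = 4000, 1e-3, 4.0, 2.0
mu = xi_of_w(m, w) / K0
lhs, rhs = H(m, w / z, mu), z ** (1 / (4 * K0)) * H(m, w, mu)
print(f"H_mu(w/z) = {lhs:.3f}  >=  z^(1/(4*K0)) * H_mu(w) = {rhs:.3f} : {lhs >= rhs}")
```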
§ ANALYZING Β(U) β(x) = (g/φ)(x) - 1 is non-decreasing on x ∈ [m/2, m] and non-increasing on x∈ [0, m/2). By plugging the expressions of g(x) and φ(x), we obtain d β(x) /du = d(g/φ)(x)/dx = 2^m/m+1·d m x/dx = 2^m d (Γ(x+1) Γ(m-x+1)) / dx/(m+1) Γ(m+1). We now show d β(x) / dx ≥ 0. By calculation, dΓ(x+1) Γ(m-x+1)/dx = Γ(x+1) Γ(m-x+1) [ Γ'(x+1) - Γ'(m-x+1) ], where Γ'(·) is the derivation of Γ(·), The last display is non-negative for x ∈ [m/2, m] because Γ'(x+1) ≥Γ'(m-x+1) for x ∈ [m/2, m], as Γ'(x+1) is a monotone increasing function, Γ'(m - x + 1) is a monotone decreasing function, and Γ'(x+1) = Γ'(m-x+1) if and only if x = m/2. Thus, we verified that d(g/φ)(x)/dx ≥ 0 and hence, β(x) is non-decreasing for x ∈ [m/2, m]. In particular, when x > m/2, β(x) is a strictly increasing function. Since φ(x) is symmetric at m/2, we also have that β(x) is non-increasing on [0, m/2). Define β(u) = (g/φ)(u) - 1, let ξ be the solution of β(m/2 + mξ) = 1/w for w ∈ (0, 1), then there exists a fixed ξ_∘∈ (0, 1/2) such that for any |ξ| ≤ξ_∘, 2mξ^2 ≤log (1 + 1/w) + log( √(2)(1+m)/√(π m(1-4ξ_∘^2))) + 1/12m, 2mξ^2 ≥[log (1+1/w) + log(√(2)(1+m)/√(π m))](1 + 8ξ_∘^2/3(1-4ξ_∘^2)^2)^-1. If ξ = ξ_n and mξ_n^4 → 0, as m→∞, then |ξ_n| ∼√(log(1+1/w) + log(√(2) (m + 1)/√(π m))/2m). By definition, β(m/2 + mξ)) = 1/w implies (g/φ)(m/2 + mξ) = 1 + 1/w, By plugging-in g(x) = (1+m)^-1 and φ(x) = Bin(m, 1/2) and then taking the logarithm of both sides, we obtain -log (1+1/w) = log (1+m) + logm m/2 + mξ - 2log m. Without loss of generality, we assume 0 ≤ξ < 1/2, as the binomial coefficient is symmetric at m/2, by Lemma <ref>, we have √(2) e^-m T(1/2 + ξ, 1/2) + 2log m/√(π m(1- 4 ξ^2))≤m m/2 + mξ≤√(2) e^-m T(1/2 + ξ, 1/2) + 2log m + ω(ξ)/√(π m(1- 4 ξ^2)), where ω(ξ) ≤ (12m)^-1. Apply (2) in Lemma <ref>, we further obtain 2 ξ^2 ≤ T(1/2 + ξ, 1/2) ≤ 2 ξ^2 + 16 ξ^4/3(1- 4ξ^2)^2. We now use the bounds in (<ref>) and (<ref>) to bound (<ref>) and then rearranging terms on both sides. Using that ξ≤ξ_∘, we then arrive at 2mξ^2 ≤log (1+1/w) + log(√(2)(1+m)/√(π m (1-4ξ_∘^2))) + 1/12m. and 2mξ^2(1 + 8ξ^2/3(1-4ξ^2)^2) ≥log (1+1/w) + log(√(2)(1+m)/√(π m)). Using that ξ^2/(1-4ξ^2)^2≤ξ_∘^2/(1-4ξ_∘^2), we obtain the lower bound. If mξ_n^4 = o(1), then (<ref>) implies T(1/2 + ξ_n, 1/2) ∼ 2ξ_n^2. Also 1-4ξ_n^2 ∼ 1. Thus, we have 2mξ_n^2 ∼log(1+1/w) + log( √(2)(1+m)/√(π m)), which implies (<ref>). Let β(u) = (g/φ)(u) - 1 where g(u) = 1/(m+1) and φ(u) = Bin(u; m, 1/2), then √(π m)/√(2)(m+1)(1+(12m)^-1) - 1 ≤β(m/2) ≤√(π m)/√(2)(m+1) - 1. If m →∞, then β(m/2) + 1∼√(π m)/√(2)(m+1). Since φ(m/2) = m m/2 2^-m, by Lemma <ref> and Lemma <ref>, we obtain √(2/(π m))φ(m/2) ≤√(2/(π m))e^ω(0) for ω(s) given in Lemma <ref>. In fact, from the proof of that lemma, we have ω(0) ≤ (12m)^-1, which implies e^ω(0)≤ (1 + (12m)^-1) → 1 as m →∞. We thus obtain the bound for β(m/2) using the bounds for φ(m/2). Because the bounds for the binomial coefficient in Lemma <ref> is sharp, one could also derive the sharp boundary for x when β(x) = 0. The next lemma presents the boundary. Define β(x) = (g/φ)(x) - 1, let ν_n be the solution of β(m/2 + mν_n) = 0, there exists a fixed ν_∘∈ (0, 1/2) such that for |ν_n| < ν_∘, 2mν_n^2 ≤log( √(2)(1+m)/√(π m(1-4ν_∘^2))) + 1/12m, 2mν_n^2 ≥[log(√(2)(1+m)/√(π m))](1 + 8ν_∘^2/3(1-4ν_∘^2)^2)^-1 . In particular, if mν_n^4 → 0 as m →∞, then |ν_n| ∼√(1/2mlog(√(2) (m + 1)/√(π m))). The proof is essentially the same as that Lemma <ref>, expect that one needs to replace -log (1+1/w) in that lemma with 0, ξ_n with ν_n, and ξ_∘ with ν_∘. 
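A quick numerical illustration of the two lemmas above, in the same spirit as the figure discussed next: solve β(m/2 + mν) = 0 and β(m/2 + mξ) = 1/w on the integer grid, with β = (g/φ) − 1, g uniform and φ = Bin(m, 1/2), and compare with the asymptotic expressions. The values of m and w below are arbitrary.

```python
# Illustration only: exact (grid) thresholds nu_n and xi_n(w) versus their asymptotics.
import numpy as np
from scipy.stats import binom

def thresholds(m, w):
    u = np.arange(m // 2, m // 2 + int(0.4 * m) + 2)
    beta = (1.0 / (m + 1)) / binom.pmf(u, m, 0.5) - 1.0
    nu = (u[beta >= 0.0].min() - m / 2) / m          # beta crosses 0
    xi = (u[beta >= 1.0 / w].min() - m / 2) / m      # beta crosses 1/w
    return nu, xi

w = 0.01
for m in (30, 200, 2000):
    nu, xi = thresholds(m, w)
    c_m = np.log(np.sqrt(2) * (m + 1) / np.sqrt(np.pi * m))
    nu_asym = np.sqrt(c_m / (2 * m))
    xi_asym = np.sqrt((np.log(1 + 1 / w) + c_m) / (2 * m))
    print(f"m={m:5d}   nu={nu:.4f} (approx {nu_asym:.4f})   xi={xi:.4f} (approx {xi_asym:.4f})")
```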
In Figure <ref>, we plot the relation between (g/ϕ)(x) and 1. As binomial is a discrete distribution, we apply linear interpolation between points. The blue dash line indicates the threshold, which is the intersection between the two functions, and the red solid line represents its approximated value from (<ref>). We made three plots in the figure, representing values from m = 6, 10, and 30, respectively. We observe that ν_n is already close to the threshold when m = 6. Of course, as m increases, the two values become closer. They almost overlap when m = 30. Define β(x) = (g/φ)(x) - 1, let ν_n be the solution of β(m/2 + mν_n) = 0 and ξ_n(w) be the solution of β(m/2+mξ_n) = 1/w, for ζ_n(w) given in (<ref>), if mξ_n^4 → 0 as m →∞, then for any w ∈ (0, 1), ξ_n^2(w) ∼ν_n^2 + ζ_n^2(w). One can easily prove the result by using the asymptotic bounds for ξ_n^2(w) and ν_n^2(w) in Lemmas <ref> and <ref> respectively. Given β(x) = (g/φ)(x) - 1, for a sufficiently large m, for x ∈ [m/2 - mν_n, m/2 + mν_n] for ν_n given in Lemma <ref>, -1 < β(x) ≤ 0. By Lemma <ref>, β(x) is a monotone increasing function for x ∈ [m/2 , m], and by Lemma <ref>, we thus have β(u) ≤ 0. We also have β(m/2) ≤β(x) ≤β(m/2 + mν_n). Since β(x) is symmetric at m/2, the same inequality holds for x = m/2 - mν_n. From Lemma <ref>, √(2/π m)≤ϕ(m/2) ≤√(2/π m)e^ω(ν_n), where the expression for ω(ν_n) is given in Lemma <ref>, and ω(ν_n) → 0 as m →∞. Therefore, 0 < √(π m)/(m+1)√(2) e^- ω(ν_n)≤β(m/2) + 1 ≤√(π m)/(m+1)√(2). The lower bound above implies β(m/2) > -1. § PROOF OF PROPOSITION <REF> Since (θ_0, )= 𝔼_θ_0(θ_0, φ) and (θ_0, )= 𝔼_θ_0(θ_0, φ), where (θ_0, )= ∑_j=1^n 1{θ_0,j = 1/2}_j/1 ∨∑_j=1^n _j, (θ_0, )= ∑_j=1^n 1{θ_0,j≠ 1/2} (1-_j)/1 ∨ s_n, are the false discovery proportion and the false negative proportion respectively and _j := _j(x) is the test function; _j = 0 if θ_0,j = 1/2 and _j = 1 if otherwise for each j ∈{1, …, p}. Therefore, by the definition of ℜ(θ_0, φ) in (<ref>), we have ℜ(θ_0, φ) = 𝔼_0((θ_0, )+ (θ_0, )) = ∫ P_θ_0((θ_0, )+ (θ_0, )≥ q) dq. Let's first prove the following: for any ϵ∈ (0, 1) and s_n > 0, there exists d_n, ϵ such that m d_n,ϵ^2 = 𝐁̅^-1((1+ϵ^-1)s_n/(n-s_n)) - 𝐁̅^-1(ϵ/4) such that sup_∈𝒯sup_θ_0 ∈Θ_0^-[s_n; d_n, ϵ] (P_θ_0 ((θ_0, )+ (θ_0, )≤ 1-ϵ) ) ≤ 3e^-s_nϵ/6, for the set Θ_0^- [s_n; d_n] = {θ∈ℓ_0[s_n]: |θ_0, j| ≤ d_n, |S_θ| = s_0}, for any d_n ≥ 0 (possibly d_n → 0 as n →∞.) To prove (<ref>), we first obtain a lower bound for FDP. Let δ and τ be arbitrary positive numbers and |θ_0,j - 1/2| ≤ d_n, ϵ, then (θ_0, ) = s_n^-1∑_j=1^n 1{θ_0,j = 1/2, |X_j - m/2| > mτ}/ 1 + s_n^-1∑_j=1^n 1{θ_0,j = 1/2, |X_j - m/2| > mτ} ≥ 1 - ( 1/s_n∑_j=1^n 1{θ_0,j = 1/2, |X_j - m/2| > mτ})^-1. Let 𝒜 = {τ: τ≤ d_n, ϵ + δ}, then (θ_0, )≥(θ_0, φ)1_𝒜 and (θ_0, )1_𝒜 ≥ 1 - ( 1/s_n∑_j=1^n {θ_0,j = 1/2, |X_j - m/2| > m(d_n, ϵ + δ) })^-1 ≥ 1 - max{( 1/s_n∑_j=1^n {θ_0,j = 1/2, X_j > m/2 + m(d_n, ϵ + δ) })^-1, ( 1/s_n∑_j=1^n {θ_0,j = 1/2, X_j < m/2 - m(d_n, ϵ + δ) })^-1}. The lower bound for the FNP can be obtained similarly, let's write (θ_0, ) = 1/s_n∑_i=1^n {θ_0,j≠ 1/2}(1-_j) = 1/s_n∑_j=1^n 1{θ_0,j≠ 1/2, -mτ < X_j - m/2 < mτ} = 1/s_n∑_j=1^n 1{θ_0,j≠ 1/2, -m(τ + θ_0,j) < X_j - mθ_0,j - m/2 < m (τ +θ_0,j)} ≥1/s_n∑_j=1^n 1{θ_0,j≠ 1/2, - m(τ - d_n, ϵ) < X_j < m(τ - d_n, ϵ)}, as |θ_0,j - 1/2| < d_n, ϵ by assumption. Consider the event 𝒜^c = {τ: τ > d_n, ϵ + δ}, we have (θ_0, )≥(θ_0, )1_𝒜^c≥1/s_n∑_j=1^n 1{θ_0,j≠ 1/2, |X_j - mθ_0,j| < mδ}. 
By combining the lower bounds in (<ref>)–(<ref>), one obtains (θ_0,) + (θ_0, ) ≥min{1/s_n∑_j=1^n 1{θ_0,j≠ 1/2, |X_j - mθ_0,j| < mδ}, (1 - 1/s_n∑_j=1^n {θ_0,j = 1/2, X_j > m/2 + m(d_n, ϵ + δ) })^-1, (1 - 1/s_n∑_j=1^n {θ_0,j = 1/2, X_j < m/2 - m(d_n, ϵ + δ) })^-1}. For any ϵ∈ (0, 1), we then have P_θ_0((θ_0, )+ (θ_0, ) ≤ 1- ϵ) ≤ P_θ_0( 1/s_n∑_j: θ_0,j≠ 1/21{|X_j - mθ_0,j| < mδ}≤ 1-ϵ) + 2 P_θ_0( 1/s_n∑_j: θ_0,j =1/21{X_j - m/2 ≥ m(d_n, ϵ + δ)}≤1/ϵ). If choosing ϵ = 2(mδ), (mδ) = P(|X_j - mθ_0,j| ≥ mδ), (<ref>) can be bounded by P_θ_0(1/s_n∑_j: θ_0,j≠ 1/21{ | X_j - m θ_0,j | < m δ}≤ 1-ϵ) = P_θ_0(∑_j: θ_0,j≠ 1/21{ | X_j - m θ_0,j | ≥ m δ}≥ s_nϵ) = P_θ_0(∑_j: θ_0,j≠ 1/2(1{ | X_j - m θ_0,j | ≥ m δ} - (mδ) ) ≥ s_nϵ/2 ). Applying the Bernstein's inequality in Lemma <ref> with V = ∑_j:θ_0,j≠ 1/2Var(1{|X_j - mθ_0,j| > mδ}) ≤ s_n (mδ) = s_nϵ/2, A = s_nϵ/2 and M ≤ 1, (<ref>) can be bounded by exp(- s_n^2 ϵ^2/8(V + s_nϵ/6)) ≤exp(- s_n^2 ϵ^2/4(s_nϵ + s_nϵ/3)) = exp(- 3s_n ϵ/16), One then can bound (<ref>) using a similar strategy: first, subtracting 𝐁̅(md_n, ϵ+mδ) on both sides, we have 2 P_θ_0( ∑_j: θ_0,j = 1/2( 1{X_j - m/2 ≥ m(d_n, ϵ + δ) } - 𝐁̅(md_n, ϵ + mδ) ) ≤s_n/ϵ - (n-s_n) 𝐁̅(md_n, ϵ+mδ) ) . We now choose d_n, ϵ such that (n - s_n) 𝐁̅(md_n, ϵ+mδ) = s_n/ϵ + s_n ( since ϵ = 2(mδ) = 4𝐁̅(mδ) implies δ = 𝐁̅^-1(ϵ/4)/m), we have md_n, ϵ = 𝐁̅^-1(s_n/n-s_n(1+ϵ^-1) ) - 𝐁̅^-1(ϵ/4). Then, the last display is bounded by 2 P_θ_0(∑_j: θ_0,j = 1/2(1{X_j - m/2 ≥ m(a_n + δ) } - 𝐁̅(a_n + δ) ) ≤ - s_n ) ≤ 2 exp(- s_n^2/2(2s_n/ϵ + s_n/3)) = 2 e^- 3s_n ϵ/14, which we used the Bernstein's inequality. By combining the lower bounds for (<ref>) and (<ref>) gives sup_∈𝒯sup_θ_0 ∈Θ_0^-[s_n, d_n, ϵ] P_θ_0 ((θ_0, )+ (θ_0, )≤ 1- ϵ) ≤ e^-3s_nϵ/16 + 2e^-3s_nϵ/14≤ 3e^-s_nϵ/6. We thus verified (<ref>). We now ready to obtain a lower bound for (<ref>). By taking the integral with respect to ϵ≥ 1/t_n for some t_n →∞, we have inf_∈𝒯inf_θ_0 ∈Θ_0^-[s_n, d_n, ϵ] P_θ_0 ((θ_0, )+ (θ_0, )> 1- ϵ) > 1- 3e^-s_nϵ/6. Let q = 1-ϵ and choose b_n such that mb_n = 𝐁̅^-1((t_n + 1) s_n/n-s_n) - 𝐁̅^-1(1/4t_n), we obtain inf_∈𝒯inf_θ_0 ∈Θ_0^-[s_n; b_n]ℜ(θ_0, ) = inf_∈𝒯inf_θ_0 ∈Θ_0^-[s_n; b_n]∫ P_θ_0((θ_0, )+ (θ_0, )> 1-ϵ) d(1-ϵ) ≥inf_∈𝒯inf_θ_0 ∈Θ_0^-[s_n; b_n]∫ (1 - 3e^-s_n ϵ/6) d(1-ϵ) ≥inf_∈𝒯inf_θ_0 ∈Θ_0^-[s_n; b_n]∫_0^1-1/t_n (1 - 3e^-s_n (1-y)/6) dy ≥ 1 - 1/t_n - 18/s_n. One needs to choose t_n →∞ but not too large, as otherwise, (<ref>) will be negative. Using Lemma <ref>, s_n/n → 0; also, mε^4 = log^2(2log(n/s_n))/m ≤ 4(1-v_1)^2log^2n/m → 0 as ε∼√(2log(n/s_n)/m) = √(2(1-v_1)log n/m) and m≫log^2n by assumption, (<ref>) implies b_n ∼√(log(n/s_n - 1) - log (1+t_n)/2m) - √(log 4t_n/2m). Choosing t_n = log (n/s_n), then t_n →∞ and log(t_n) = loglog(n/s_n) = o(log(n/s_n)). Thus 1 - t_n^-1 - 18s_n^-1→ 1 and b_n ∼√(log (n/s_n)/(2m)) for a sufficiently large n/s_n and m. Combining the above results, we obtain lim inf_n →∞inf_∈𝒯sup_θ_0 ∈Θ_0[s_n, a] ((θ_0, ) + (θ_0, )) ≥lim inf_n →∞inf_∈𝒯inf_θ_0 ∈Θ_0^-[s_n; b_n] ((θ_0, ) + (θ_0, )) ≥ 1. for any a < 1, as n, m →∞. § SEVERAL USEFUL BOUNDS FOR THE BINOMIAL DISTRIBUTION Let m m/2 + ms be the binomial coefficient for any s ∈ [0, 1/2) and T(a, p) = alog (a/p) + (1-a) log ((1-a)/(1-p)) for a, p ∈ (0, 1), then m m/2 + ms = √(2) e^-m T(1/2 + s, 1/2) + mlog 2 + ω(s)/√(π m(1- 4 s^2)), where ω(s) = a_1 - a_2(s) - a_3(s), (12m+1)^-1≤ a_1 ≤ (12m)^-1, (6m+12ms + 1)^-1≤ a_2(s) ≤ (6m+12ms)^-1, and (6m-12ms + 1)^-1≤ a_3(s) ≤ (6m-12ms)^-1. In particular, if ms^2 → 0, then logm m/2 + ms∼ - 1/2log(π m/2) + mlog 2. 
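Before turning to the proof, a short numerical illustration of the lemma (illustration only, with arbitrary m and s): the exact log binomial coefficient is compared with the main term −mT(1/2 + s, 1/2) + m log 2 − ½ log(πm(1 − 4s²)/2); the remaining correction ω(s) is of order 1/m.

```python
# Illustration only: log C(m, m/2 + m*s) versus the Stirling-based main term of the lemma.
import numpy as np
from scipy.special import gammaln

def T(a, p):
    return a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))

def log_binom(m, k):
    return gammaln(m + 1) - gammaln(k + 1) - gammaln(m - k + 1)

m = 1000
for s in (0.05, 0.10, 0.20):
    exact = log_binom(m, m / 2 + m * s)
    approx = -m * T(0.5 + s, 0.5) + m * np.log(2) - 0.5 * np.log(np.pi * m * (1 - 4 * s**2) / 2)
    print(f"s={s:.2f}   exact={exact:.4f}   approx={approx:.4f}   diff={exact - approx:.2e}")
```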
We write the binomial coefficient as follows: m m/2 + ms = m!/(m/2 + ms)!(m/2 - ms)!, (<ref>) is obtained by directly applying the Sterling approximation: for any n ∈ℤ^+: n! = √(2π)exp((n + 1/2)log n - n + a(n) ), where (12n+1)^-1≤ a(n) ≤ (12n)^-1. The result in (<ref>) follows by using Lemma <ref>, T(1/2+s, 1/2) ∼ 2ms^2 when s = o(1), and noting that a_1, a_2(s), a_3(s) → 0 as m→∞. Let X ∼Bin(m, θ) be the binomial distribution with parameters m and θ. Let 𝐁̅_θ(k) = ∑_k' = k^mφ_θ(k' = k) be one minus the cdf function, then if k = ma for 1 > a > θ≥ 1/2 and T(a, θ) = a loga/θ + (1 - a) log1- a/1-θ, then e^-mT(a, θ)/√(2π ma (1-a))≤𝐁̅_θ(ma) ≤a(1-θ) e^-mT(a, θ) + 1/(12m)/(a-θ) √(2π ma (1-a)), Furthermore, 𝐁̅_θ(ma) ≤ e^-mT(a, θ) and if m →∞, then - 1/mlog𝐁̅_θ(ma) ∼ T(a, θ). To prove the lower bound, we use the inequality 𝐁̅_θ(ma) ≥φ_θ(x = ma) ≥m maθ^ma(1-θ)^m - ma. Then, the result for the lower bound follows directly by applying the lower bound of the binomial coefficient given in (<ref>). Since 1/m√(2/π m a(1-a)) is of a smaller order of T(a, θ) as m →∞, we thus obtain the second inequality. To prove the upper bound, we write 𝐁̅_θ(ma) =𝐁̅_θ(ma)/φ(ma)φ(ma) and then apply Lemma <ref> and the upper bound for the binomial coefficient in (<ref>). The second upper bound is the Chernoff bound for the binomial cdf. <cit.> Let X ∼Bin(m, 1/2) and 𝐁̅(k) = P(X ≥ k), if m ≥ 28, define γ(ε) = (1+ε)log (1+ε) + (1-ε)log(1-ε) - ε^2/2ε^4 = ∑_r = 0^∞ε^2r/(2r+3)(2r+4), which is an increasing function. Define ε = 2K-M/M where K = k -1 and M = m-1 and λ_m ∈ [(12m+1)^-1, (12m)^-1], then there exists a constant C > 0 such that 𝐁̅(k) = P(X ≥ k) = Φ̅(ε√(M)) e^A_m(ε), where A_m(ε) = -M ε^4 γ(ε) - log (1-ε^2)/2 - λ_m-k + r_k and -Clog M/M≤ r_k ≤ C/M for all ε corresponding to the range m/2 < k ≤ m-1. <cit.> Let X ∼Bin(m, θ) be the binomial distribution with parameters m and θ, where 0 < θ < 1, m ≥ 1, and mθ≤ k ≤ m. Define z = (k - mθ)/σ, σ = √(mθ(1-θ)), then _θ(k) = σBin(k-1; m-1, θ) Y(z) exp(E_θ(k, m)/σ), where Bin(k-1; m-1, θ) is the binomial distribution at k -1 with parameters m -1 and θ, Y(z) = Φ̅(z)/ϕ(z), and 0 ≤ E_θ(k, m) ≤min{√(π/8), 1/z}. <cit.> Let _θ(k) = ∑_q = k^mφ_θ(q) be one minus the cdf of Bin(m, θ), if k ≤ mθ, then _θ(k) ≥ 1 - Φ(k - mθ/√(mθ)). <cit.> Let φ_θ(k) = Bin(m, θ) and _θ(k) = ∑_q = k^mφ_θ(q), then for any k > mθ, θ∈ (0, 1), and m ≥ 1, k/m≤_θ(k)/φ_θ(k)≤k(1-θ)/k - mθ. Let 𝐁̅(m/2 + mx) = ∑_q = m/2 + mx^mφ(q) be one minus the cdf of Bin(m, 1/2), define M = m-1, K = k -1= m/2+mx-1, and ε = 2K/M - 1, if ε≤ 0.957 and m ≥ 28, then for any y ∈ (0, 1/2), m+1/2 + M/2ϝ_m(y) ≤^-1(y) ≤m+1/2 + M/2ϝ_m(y), where - Clog M/M ≤ r_k ≤ C/M for some fixed constant C, λ_m-k∈ [1/12m+1, 1/12m], ϝ_m(y) = √(( {(2log(1/y) - loglog(1/y) + 2r_k - 2λ_m-k - log(16π)/2M + 1/2)_+}^1/2 - 1/√(2))_+), and ϝ_m(y) = √(2log(1/y) - 2λ_m-k + 2r_k/M-2). In particular, if m →∞ and m ε^4 → 0, then m/2 + √(m/2(log(1/y) - log√(log(1/y)) - log(4√(π)))_+)≤^-1(y) ≤m/2 + √(mlog(1/y)/2); furthermore, if y → 0, then ^-1(y) ∼ m/2 + √(mlog(1/y)/2). Let's first provide the upper and lower bound for the inverse of the standard Gaussian cdf, which is given in Lemma 36 of : denote h = Φ̅(z), the upper tail probability of the standard Gaussian, for any y ∈ (0, 1/2), {(2log(1/h) - loglog(1/h) - log(16π))_+}^1/2≤Φ̅^-1(h) ≤{2log (1/h)}^1/2. Let M = m-1 and ε = 2mx - 1/M, then by (<ref>), we have y = (m/2 + mx) = Φ̅(ε√(M)) e^A_m(ε), where the expression for A_m(ε) is given in Lemma <ref>. Upper bound. 
Combining (<ref>) with the upper bound in (<ref>), we obtain ε^2M≤ 2log(1/y) + 2A_m(ε). By plugging-in the expression of A_m(ε) leads to 2log(1/y) ≥ε^2M + 2Mε^4γ(ε) + log (1-ε^2) + 2λ_m-k - 2r_k ≥ε^2M - 2ε^2 + 2λ_m-k - 2r_k, as γ(ε) ≥ 0 and log (1-ε^2) > -2ε^2 for ε∈ (0, 1/2). The last display implies ε≤√(2log(1/y) - 2λ_m-k + 2r_k/M-2), thus, since ^-1(y) = m/2 + ε M + 1/2, one obtains ^-1(y) ≤m+1/2 + M/2√(2log(1/y) - 2λ_m-k + 2r_k/M-2). Lower bound. To prove the lower bound part, we use the lower bound in (<ref>), with (<ref>), one obtains ε^2 M ≥ 2log(1/y) + 2A_m(ε) - log(log(1/y) + A_m(ε)) - log(16π), which implies 2log(1/y) ≤ε^2 M - 2A_m(ε) + log (log(1/y) + A_m(ε)) + log (16π) ≤ε^2 M - 2A_m(ε) + loglog(1/y) + log (16π) = ε^2 M + 2Mε^4 γ(ε) + log(1-ε^2) + 2λ_m-k - 2r_k + loglog(1/y) + log (16π) ≤ε^2 M + 2Mε^4 - ε^2 + 2λ_m-k - 2r_k + loglog(1/y) + log (16π) ≤ 2M(ε^2 + 1/√(2))^2 - M + 2λ_m-k - 2r_k + loglog(1/y) + log (16π). We used A_m(ε) ≤ 0 to obtain the second inequality in the last display and log(1-x) ≤ -x as long as 1-x > 0 and γ(ε) ≤ 1 to obtain the third inequality. To prove γ(ε) ≤ 1, we have γ(ε) = ∑_r = 0^∞ε^2r/(2r+3)(2r+4)≤1/12∑_r = 0^∞ε^2r = 1/12(1-ε^2)≤ 1, as ε < √(1- 1/12)≈ 0.957. From (<ref>), we have (ε^2 + 1√(2))^2 ≥1/2M( 2log(1/y) - loglog(1/y) + M + 2r_k - 2λ_m-k - log(16π) ). Taking the square root of both sides and subtracting 1/√(2) in the preceding display, the lower bound for ^-1(y) follows by plugging the lower bound of ε into (m + 1 + ε M)/2 = ^-1(y). To prove the second inequality, for a sufficiently large M, m ≈ M, r_k = o(1), and λ_m-k = o(1), the expression of the upper bound for ^-1(y) then reduces to m/2 + √(mlog(1/y)/2). Since mε^4 = o(1), from (<ref>), one obtains 2log (1/y) ≤ Mε^2 + loglog(1/y) + log (16π) + o(1). Thus, ε^2 ≥ 2log(1/y) - loglog(1/y) - log(16π) for a sufficiently large m. Using ^-1(y) =(m + 1 + ε M)/2 ≈ m/2 + mε/2 for a sufficiently large m leads to the lower bound. When y → 0, then log(1/y) ≫loglog(1/y)/2 + log(4√(π)), which leads to the last inequality. For a ≥ 1/2 and p ≥ 1/2, let T(a, p) = a log( a/p) + (1-a) log(1-a/1-p), and define h_p(ϵ) = T(p + ϵ, p), then (a) h_p(ϵ) is a monotone increasing function on ϵ∈ (0, 1-p) and a monotone decreasing function on ϵ∈ (-p, 0); (b) h_p(ϵ) is continuous and nonnegative; it achieves the global minimum at ϵ = 0. (c) ϵ^3 h_p”'(ϵ)/6 is positive if ϵ∈ [0, 1-p) or ϵ∈ (-p, 1/2-p); it is negative if ϵ∈ (1/2 - p, 0). When p = 1/2, ϵ^3 h_p”'(ϵ)/6 ≥ 0 for any ϵ∈ (-1/2, 1/2). (d) There exists ϵ^⋆ such that ϵ^⋆∈ [0, ϵ] if ϵ≥ 0 or ϵ^⋆∈ [ϵ, 0] if ϵ < 0, h_p(ϵ) = ϵ^2/2p(1-p) + ϵ^3 (2p+2ϵ^⋆ - 1)/6(p+ϵ^⋆)^2(1-p-ϵ^⋆)^2. In particular, we have the following: (1) if ϵ = 0, then h_p(ϵ) = 0; (2) if 0 < ϵ < 1-p, then, ϵ^2/2p(1-p)≤ h_p(ϵ) ≤ϵ^2/2p(1-p) + 8 ϵ^3 (2p + 2ϵ - 1)/3 (1-4(p + ϵ - 1/2)^2)^2; (3) if 1/2- p < ϵ < 0, then ϵ^2/2p(1-p) + 8 ϵ^3 (2p -1)/3 (1-4(p + ϵ - 1/2)^2)^2≤ h_p(ϵ) ≤ϵ^2/2p(1-p); (4) if -p < ϵ < 1/2-p, then ϵ^2/2p(1-p)≤ h_p(ϵ) ≤ϵ^2/2p(1-p) + 8 ϵ^3 (2p - 1)/3 (1-4(p - 1/2)^2)^2; (5) If p = 1/2 and ϵ = o(1), then h_p(ϵ) ∼ 2ϵ^2 for any ϵ∈ (-1/2, 1/2). The following results are useful for our proof: h_p(ϵ) = (p + ϵ) log(p + ϵ/p) + (1 - p - ϵ) log(1 - p- ϵ/1-p), h_p'(ϵ) = log (1 + ϵ p^-1) - log (1 - ϵ(1 - p)^-1), h_p”(ϵ) = (p + ϵ)^-1 + (1-p-ϵ)^-1, h_p”'(ϵ) = 2p + 2ϵ - 1/(p + ϵ)^2(1- p - ϵ)^2. First, let us verify (a)–(c). (a) is easy to verify as h_p'(ϵ) > 0 if ϵ > 0 and h_p'(ϵ) < 0 if ϵ < 0. For (b), the proof of h_p(ϵ) for ϵ∈ (-p, 1-p) is continuous is trivial and thus is omitted. 
since h”(ϵ) > 0, h_p(ϵ) is a convex function; also, h_p(ϵ) achieves the global minimum at ϵ = 0 and h(0) = 0, so, h_p(ϵ) is nonnegative. Next, we prove (d). By applying the Taylor's theorem up to the third term together with the mean-value theorem, we obtain h_p(ϵ) = ϵ^2/2p(1-p) + ϵ^3 h_p”'(ϵ^⋆)/6, for an ϵ^⋆ between 0 and ϵ. Last, we prove (1)–(5). (1) is trivial. (2) and (3) can be verified by plugging-in the expression for h”'_p(ϵ^⋆) and noting that h”'(ϵ^⋆) > 0 if ϵ∈ (0, 1-p) and ϵ^3 h”'(ϵ^⋆) < 0 if ϵ∈ (1/2 - p, 0) respectively. (4) can be proved in a similar way but noticing that h”'_p(ϵ) < 0 but ϵ h”'_p(ϵ) > 0 if ϵ∈ (-p, 1/2-p). The last result can be verified easily by plugging p = 1/2 into (<ref>) and then using ϵ = o(1), then ϵ^3h”'_p(ϵ^⋆) = o(12ϵ^2) for any ϵ∈ (-1/2, 1/2). Let X ∼Bin(m, θ) and _θ(·) be one minus of its cdf, for ξ := ξ(w) in (<ref>), for any w ∈ (0, 1), if m/2 < m θ≤ m/2 + mξ≤ m and mξ^4 → 0 as m →∞, then _θ(m/2 + mξ) ≥1/2√(1-2(θ - 1/2)/1-2ξ)Φ̅( 2√(m)(ξ - (θ - 1/2))/√(1-4(θ - 1/2)^2). ) Denote μ = θ - 1/2 and let σ = √(m (1-4μ^2))/2, z = 2√(m)(ξ - μ)/√(1-4μ^2), and Y(z) = Φ̅(z) /ϕ(z), then by Lemma <ref>, _θ(m/2 + mξ) ≥σBin(m/2 + mξ - 1; m -1, μ + 1/2) Y(z) = (1/2 + ξ) √(m(1-4μ^2))/2(1/2 + μ)φ_θ(m/2 + mξ)/ϕ(z)Φ̅(z) We need to bound the ratio φ_θ(m/2 + mξ)/ϕ(z). By Lemma <ref>, φ_θ(m/2 + mξ)/ϕ(z)≥2/√(m(1-4ξ^2)) e^-mT(1/2 + ξ, 1/2+μ) + z^2/2. Since μ < ξ, by (3) in Lemma <ref> and the assumption mξ^4 → 0, we obtain φ_θ(m/2 + mξ)/ϕ(z)≥2/√(m(1-4ξ^2)) (1-o(1)). Then (<ref>) can be bounded from below by (1-o(1))(1/2 + ξ) √(1-4μ^2)/(1/2 + μ) √(1-4ξ^2)Φ̅(z) ≥1/2√(1-2μ/1-2ξ)Φ̅(z), for a sufficiently large m. By plugging-in the expression of z, we obtain the result. Let X ∼Bin(m, θ) and _θ(·) be its upper tail probability, for positive a_1, a_2 ≤1/m such that |2ma_1^2 - 2ma_2^2| ≤ 1/4 and 1/2 ≤θ < 1, if m →∞, then there exists a C > 0 depending on θ, a_1, a_2 such that _θ(m/2 + ma_1)/_θ(m/2 + ma_2)≥ C exp(- 2m |a_1^2 - a_2^2|/1-4(θ - 1/2)^2). If a_2 ≥ a_1, the result is trivial. Let's focus on a_1 > a_2. Denote φ_θ(x; m-1) = Bin(x, m-1, θ) and note that φ_θ(x) = φ_θ(x; m). If 0 ≤ a_2 < a_1 ≤θ - 1/2, we have _θ(m/2 + ma_1)/_θ(m/2 + ma_2)≥ 1/2 > 1/4 ≥1/4 e^- 2m |a_1^2 - a_2^2|, as 1/2 < _θ(m/2 + ma_1) < _θ(m/2 + ma_2) < 1. If 0 ≤ a_2 ≤θ-1/2 ≤ a_1, then _θ(m/2 + ma_2) ≥ 1/2. Let's denote μ = θ- 1/2, then by Lemma <ref>, we have _θ(m/2 + ma_1) ≥1/2√(1 -2μ/1-2a_1)Φ̅( 2√(m)(a_1 - μ)/√(1-4μ^2)) ≥1/2√(1 -2μ/1-2a_1)Φ̅( 2√(m)(a_1 - a_2)/√(1-4μ^2)). Using that 2m|a_1^2 - a_2^2| ≤1/4 and √(1 -2μ/1-2a_1)≥1/√(2), by Lemma <ref>, Φ̅( 2√(m)(a_1 - a_2)/√(1-4μ^2)) ≥2√(m)(a_1 - a_2)/√(1-4μ^2)/1 + 4m(a_1 - a_2)^2/1-4μ^2ϕ( 2√(m)(a_1 - a_2)/√(1-4μ^2)) ≥min{1/2 √(2(1-4μ^2)), √(1-4μ^2/2)}1/√(2π)exp(- 2m(a_1^2 - a_2^2)/1-4μ^2) = C_1 exp(- 2m(a_1^2 - a_2^2)/1-4μ^2). Last, if 0 ≤θ-1/2 < a_2 < a_1, by invoking Lemma <ref>, we let σ = √(m(1-4μ^2))/2 and z_i = (ma_i - mμ)/σ for i = 1, 2, then _θ(m/2 + ma_1)/_θ(m/2 + ma_2) = φ_θ(m/2 + ma_1 -1; m-1) Y(z_1)/φ_θ(m/2 + ma_2 -1; m-1) Y(z_2)exp(A_m), where Y(z) = Φ̅(z)/ϕ(z) and A_m = (E_θ(m/2 + ma_1, m) - E_θ(m/2 + ma_2, m))/σ. By Lemma <ref>, we have Φ̅(z_1)/Φ̅(z_2)≥z_1 z_2/1 + z_1^2ϕ(z_1)/ϕ(z_2). By plugging-in expressions of z_1 and z_2, using that z_2^2 > 1, we obtain Φ̅(z_1)/Φ̅(z_2)≥a_2-μ/a_1-μexp(- 2m(a_1^2 - a_2^2)/1-4μ^2). 
Next, by Lemma <ref> and then (d) in Lemma <ref>, as m→∞, m ≈ m-1 and m/2 - 1 ≈ m/2, then φ_θ(m/2 + ma_1 -1; m-1)/ϕ(z_1)/φ_θ(m/2 + ma_2 -1; m-1)/ϕ(z_2) = √(1 - 4a_2^2/1-4a_1^2) e^- (m - 1) (T(1/2 + a_1, θ) - T(1/2 + a_2, θ)) - z_1^2/2 + z_2^2/2 ≥√(1 - 4a_2^2/1-4a_1^2) e^-m (a_1^2 - a_2^2) K, where T(a, p) = alog(a/p) + (1-a)log((1-a)/(1-p)) and K = min{K(ϵ_1^⋆), K(ϵ_2^⋆)}, K(ϵ_i^⋆) = 8(μ + ϵ_i^⋆)/3 (1/2 + μ + ϵ_i^⋆)^2 (1/2 - μ- ϵ_i^⋆)^2 > 0. Since 2m|a_1^2 - a_2^2| ≤ 1/4 by assumption, e^- m(a_1^2 - a_2^2)K > e^-K/8. Thus, φ_θ(m/2 + ma_1 -1; m-1)/ϕ(z_1)/φ_θ(m/2 + ma_2 -1; m-1)/ϕ(z_2)≥√(1 - 4a_2^2/1-4a_1^2)exp(- K/8). Moreover, as m →∞, E_θ(m/2 + ma_1, m) = 1/2√(m) a_1^2→ 0. By combining the above results, we obtain (<ref>)≥(a_2-μ)√(1-4a_2^2)) e^-K/8/(a_1-μ)(1-4a_1^2)exp(- 2m(a_1^2 - a_2^2)/1-4μ^2) ≥ C_2 exp(- 2m(a_1^2 - a_2^2)/1-4μ^2). The proof is completed by taking C = min{C_1, C_2, 1/4}. § AUXILIARY LEMMAS Consider the event Ω_n = {#{j ∈𝒮_0, |X_j - m/2 | > bm ζ_n }≥ s_n - K_n} for X_j ∼Bin(m, p_j), p_j > 1/2, ζ_n given in (<ref>), s_n = |𝒮_0|, and K_n = o(s_n), if s_n ≪ (1 + 1/w)^-(a-b)^2/24, then P(Ω_n^c) = o(1). By definition, Ω_n^c = {#{j ∈𝒮_0: |X_j - m/2 | > b m ζ_n} < s_n - K_n} = {#{j ∈𝒮_0: |X_j - m/2 | ≤ bm ζ_n} > K_n} Thus P(Ω_n^c) = P(Bin(s_n, h_n) > K_n), where h_n = P(|X_j - m/2| ≤ b mζ_n) = P(|X_j - mp_j + mp_j - m/2| ≤ bm ζ_n) ≤ P(|mp_j - m/2| - |X_j - mp_j| ≤ bmζ_n/2) = P(|X_j - mp_j| > |mp_j - m/2| - b m ζ_n/2) ≤ P(|X_j - mp_j| > (a-b)m ζ_n/2). We used the inequality |a +b| ≥ |a| - |b| to obtain the first inequality in the last display. Since 𝔼(X_j) = mp_j, by applying Chernoff bound P(|x - μ| > ημ) ≤ 2e^-η^2 μ/3 for 0 < η < 1 and choosing η = (a-b) ζ_n/(2p_j), we have h_n ≤ 2 e^-m (a-b)^2 ζ_n^2/(12p_j)≤ 2 e^-m (a-b)^2 ζ_n^2/12. Since ζ_n^2 ∼1/2mlog(1+w^-1), h_n ≤ 2e^-(a-b)^2 log(1+1/w)/24 = 2 (1+1/w)^- (a-b)^2/24 := h̃_n. Note that h̃_n → 0 as m →∞. Since P(A_n^c) = P(Bin(s_n, h_n) > K_n) ≤ P(Bin(s_n, h̃_n) > K_n), we use the Bernstein's inequality (see Lemma <ref>) to control the probability in the upper bound of last display. Denote Z_i ∼Bern(h̃_n), 1 ≤ i ≤ s_n as s_n independent Bernoulli variables, choose A = K_n ≥ 2s_n h̃_n and note that ∑_j ∈𝒮_0Var(Z_i) = s_n h̃_n (1-h̃_n) ≤ s_n h̃_n =V and M = 1, we arrive at P(Bin(s_n, h̃_n) > K_n) = P(∑_i=1^s_n Z_i > K_n) ≤exp(- K_n^2/2 s_n h̃_n + 2K_n/3) ≤exp(-6/5s_nh̃_n), which is o(1) as s_nh̃_n → 0 for a large enough m or n. (Bernstein's inequality) Let W_i, 1 ≤ i ≤ n, be centered independent variables with |W_i| ≤ M and ∑_i=1^n Var(W_i) ≤ V, then for any A ≥ 0, P( ∑_i=1^n W_i ≥ A ) ≤exp( - A^2/2(V + MA/3)), P( ∑_i=1^n W_i ≤ -A ) ≤exp( - A^2/2(V + MA/3)). (KMT approximation theorem <cit.>) Let ϵ_1, …, ϵ_n be i.i.d. random variables with 𝔼(ϵ_1) = 0 and 𝔼(ϵ_1^2) = 1, and 𝔼e^θϵ_1 < ∞ for some θ > 0. For each k, let S_k = ∑_i=1^k ϵ_i. Then for any n, it is possible to construct a version of (S_k)_0 ≤ k ≤ n and a standard Brownian motion (W_k)_0 ≤ k ≤ n on the same probability space such that for all x ≥ 0, P (max_k≤ n|S_k - W_k| ≥ Clog n + x) ≤ K_1 e^- K_2 x, for some positive constants C_1, K_1, K_2 do not depend on n. For any x>0, let ϕ(·) and Φ(·) be the pdf and cdf of the standard normal distribution respectively. Denote Φ̅(·) = 1-Φ(·), then for any x > 0, x ϕ(x)/1+x^2 < Φ̅(x) < ϕ(x)/x, In particular, for any x ≥ 1, Φ̅(x) ≥ϕ(x)/2x and, if x →∞, Φ̅(x) ∼ϕ(x)/x. If x → 0 is small, we also have 1/√(2π) e^-x^2/2 < Φ̅(x) < 1/2 e^-x^2/2. 
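These classical Gaussian tail bounds are easy to verify numerically; the sketch below tabulates the lower bound xφ(x)/(1 + x²), the exact Φ̄(x), and the upper bound φ(x)/x over an arbitrary grid of x values.

```python
# Illustration only: Mills-ratio type bounds  x*phi(x)/(1+x^2) < Phi_bar(x) < phi(x)/x.
import numpy as np
from scipy.stats import norm

for x in (0.5, 1.0, 2.0, 4.0, 8.0):
    lower = x * norm.pdf(x) / (1 + x**2)
    upper = norm.pdf(x) / x
    print(f"x={x:4.1f}   lower={lower:.3e}   Phi_bar={norm.sf(x):.3e}   upper={upper:.3e}")
```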
<cit.> Let ϕ(x) and Φ(x) be the pdf and the cdf of the standard normal distribution respectively, define Φ̅(x) = 1-Φ(x), ρ(x) = ϕ(x)/Φ̅(x), r(x) = ρ(x) - x, then for x∈ℝ and δ≥ 0, the relation between Φ̅(x + δ) and Φ̅(x) satisfies the following inequality: (i) e^-δρ(x + δ)≤Φ̅(x + δ)/Φ̅(x) ≤ e^-δρ(x), (ii) e^-δ r(x)≤ e^xδ + δ^2/2Φ̅(x + δ)/Φ̅(x) ≤ e^-δ r(x +δ), (iii) e^-ρ(x) δ - δ^2/2≤Φ̅(x + δ)/Φ̅(x) ≤ e^-x δ - δ^2/2. (Lemma 40 of ) For m ≥ 1 and p_1,…, p_m ∈ (0, 1), consider U = ∑_i=1^m B_i, where B_i ∼Ber(p_i), 1≤ i≤ m, are independent. For any nonnegative variable T independent of U, we have 𝔼( T/T+U1_{T >0}) ≤exp(-𝔼 U) + 12 𝔼T/𝔼 U. § ACKNOWLEDGEMENTS The author would like to thank Ismaël Castillo for introducing the problem and offering valuable suggestions and travel support at the early stage of this paper. chicago
http://arxiv.org/abs/2307.07569v1
20230714182037
Orthologic with Axioms
[ "Simon Guilloud", "Viktor Kuncak" ]
cs.LO
[ "cs.LO", "math.LO" ]
We study the proof theory and algorithms for orthologic, a logical system based on ortholattices, which have shown practical relevance in simplification and normalization of verification conditions. Ortholattices weaken Boolean algebras while having polynomial-time equivalence checking that is sound with respect to Boolean algebra semantics. We generalize ortholattice reasoning and obtain an algorithm for proving a larger class of classically valid formulas. As the key result, we analyze a proof system for orthologic augmented with axioms. An important feature of the system is that it limits the number of formulas in a sequent to at most two, which makes the extension with axioms non-trivial. We show a generalized form of cut elimination for this system, which implies a sub-formula property. From there we derive a cubic-time algorithm for provability from axioms, or equivalently, for validity in finitely presented ortholattices. We further show that propositional resolution of width 5 proves all formulas provable in orthologic with axioms. We show that orthologic system subsumes resolution of width 2 and arbitrarily wide unit resolution and is complete for reasoning about generalizations of propositional Horn clauses. Moving beyond ground axioms, we introduce effectively propositional orthologic, presenting its semantics as well as a sound and complete proof system. Our proof system implies the decidability of effectively propositional orthologic, as well as its fixed-parameter tractability for a bounded maximal number of variables in each axiom. As a special case, we obtain a generalization of Datalog with negation and disjunction.
§ INTRODUCTION Our goal is to build efficient building blocks for theorem proving and program verification. coNP-hardness of propositional logic already presents a barrier to large-scale reasoning, such as simplification of large formulas and using intermediate assertions to help software verification. We aim to improve the worst-case efficiency of reasoning while preserving the spirit of a specification language with conjunction, disjunction, and negation. We therefore investigate the concepts of ortholattices and orthologics as a basis of predictable reasoning. Non-distributive generalizations of classical logic, including orthologic, were introduced as quantum logic to describe experiments in quantum mechanics, where it was realized that distributivity fails <cit.>.
The term orthologic was used in <cit.> for the logic corresponding to the algebraic class of ortholattices, a generalization of Boolean algebras. In particular, the class of closed subsets of a Hilbert space is an ortholattice, but not a Boolean algebra <cit.>. In theoretical physics, ortholattices are an intermediate step towards the study of orthomodular lattices and modular ortholattices among others, <cit.>. Ortholattices have also found application in modelling of epistemic modal logic <cit.>. Recently, researchers have proposed to use ortholattices as an efficient approximation to classical logic in automated reasoning. This approach has been applied to design kernels of proof assistants <cit.>, as well as in software verification tools <cit.>. These results suggest that ortholattices can be used to simplify large formulas using polynomial-time algorithms, while providing soundness, as well as a clear mental model of the degree of its incompleteness. As an example, an ortholattice algorithm in <cit.> may reduce x z ¬ (u ¬ x) to the normal form x z. This normal form is based on the laws that hold in all ortholattices (equivalently, in the free ortholattice). This makes the technique widely applicable, but it also makes it weak in terms of classical and domain-specific tautologies it can prove. This paper explores making orthologic-based reasoning more precise and more usable, asking the following questions: * Can we formally extend orthologic with non-logical axioms? * Can we find a complete and efficient algorithm for it? * What are classes of formulas in classical logic for which orthologic proofs always exist? * Can orthologic be used effectively beyond propositional logic, for classes of predicate logic? Our approach to these questions is to use a sound and complete proof system for orthologic that we extend to support arbitrary non-logical axioms. Algebraically, our proof system is complete for establishing inequalities in the class of ortholattices specified by a given presentation of a class of ortholattices. From the practical point of view, using axioms to represent part of the input formula gives a sound and strictly stronger approximation of classical logic than using ortholattices without axioms. §.§ Ortholattices Ortholattices are a weaker structure than Boolean algebras, where distributivity does not necessarily hold. <ref> shows their axiomatization. All Boolean algebras are ortholattices; they are precisely those ortholattices that are distributive. <ref> shows two characteristic finite non-distributive ortholattices; keeping these structures in mind may provide intuition for reasoning inside the class of all ortholattices. Orthologic is the logical system that corresponds to ortholattices, analogously to how classical logic corresponds to Boolean algebras (and intuitionistic logic to Heyting algebras). §.§ Example of Using Axioms Using axioms in orthologic inference allows us to prove more classical implications than by encoding the entire problem into one formula, increasing the power of reasoning. To understand why, note that proving validity of an implication L → R in all ortholattices can be phrased as proving L ≤ R in all ortholattices, for all values to which L and R can evaluate in those ortholattices. Such an inequality needs to hold in the ortholattice O_6 in <ref> when L evaluates to, for example, b. On the other hand, using axioms, we can encode an implication problem as follows: prove that, in every ortholattice, if L=1, then also R=1. 
Because L is restricted to be 1, what remains to prove is a weaker statement, provable for more formulas. The conclusion remains sound with respect to the {0,1} lattice of classical logic, where L=1 is the only non-trivial case to check for inequality. For example, x (¬ x u) ≤ u does not hold in O_6 of <ref> (take x ↦ b, u ↦ a as a counterexample). On the other hand, in any ortholattice, if x (¬ x u) = 1 then u=1. Indeed, consider any ortholattice, and suppose x (¬ x u) = 1. Recall that, in any bounded lattice with 1 as a top element, if p q = 1 then p=1 and q=1 because 1 ≤ p q ≤ p. In our example, we conclude x = 1 and (¬ x u) = 1. Now, substituting x=1 and using ¬ 1 = 0 gives us u=1. Such algebraic reasoning has a counterpart in proof-theoretic derivations. We present in Section <ref> a system for derivation of formulas from axioms. Our system is complete for algebraic reasoning in ortholattices, allowing us to derive u if we allow x (¬ x u) as an axiom. Importantly, proof search in our system remains polynomial time, a result that we establish by showing a generalized notion of cut elimination. The use of axioms (equivalently, ortholattice presentations) cannot emulate all instances of classical propositional logic axioms (indeed, proof search in our system remains in polynomial time instead of coNP). However, the above example hints that we can indeed use axioms to prove a larger set of classically valid problems than by using one monolithic formula in orthologic. Indeed, we show a number of practically important classes of problems for which reasoning in orthologic from axioms is complete, pointing to scenarios where orthologic may find useful applications. §.§ Contributions This paper shows how to use ortholattice reasoning with axioms as a sound polynomial-time deductive approach. We make the following contributions: * We first show that a proof system for orthologic with axioms satisfies a form of the Cut Elimination property, where Cut rules are restricted to eliminate only axioms and can only appear near leaves in the proof. From this, we deduce a subformula property. * We show that, in the presence of axioms, there is an orthologic backward proof search procedure with worst case asymptotic time 𝒪(n^2(|A|+1)), where n is the size of the problem and |A| the total number of axioms. Without axioms, the algorithm is quadratic. * We study how orthologic can help solve some classes of classical problems and show that special case of satisfiability instances (2CNF, Horn clauses, renamed Horn, q-Horn, extended Horn) admit orthologic proofs, i.e. these problems are satisfiable in orthologic if and only if they are satisfiable in classical logic. * We show that orthologic decision problems can be flattened, similarly to the Tseitin transform <cit.>, and we use this to give an upper bound on the proving power of orthologic in terms of the width of classical resolution proofs. * We show how orthologic reasoning can be extended to fragments of predicate logic. We show that such quantified orthologic agrees with classical logic on the semantic of Datalog, and hence that Datalog programs admit proofs, making another possible generalization of Datalog to logic programming with negation and disjunction. § PRELIMINARIES We briefly present key concepts and notation which will be used in the present article. Ortholattices are the algebraic variety whose equational laws are presented in <ref>. 
As in any lattice, we can define an order relation ≤ by: a ≤ b (a=(a b)) which is also equivalent to (b=(a b)). This yields a partially ordered set (poset) whose corresponding axiomatization is shown in <ref> <cit.>. Note also that for any terms x, y we have x=y if and only if both x≤ y and y≤ x. The relation = defined this way is a congruence relation for ≤, , and and thus becomes equality in the quotient structure. From the point of view of first-order logic with equality, each model of Table <ref> axioms can be extended to a model of Table <ref> by defining the inequality a ≤ b as the truth value of the atomic formula a=(a b). Moreover, we have the converse: each model of Table <ref> axioms induces a quotient structure with respect to the x ≤ y & y ≤ x relation; this structure is a model of Table <ref> axioms. By definition, Boolean algebras are precisely ortholattices that are distributive. <ref> shows ortholattices O_6 and M_4 that are not Boolean algebras. In fact, an ortholattice is a Boolean algebra if and only if it does not contain O_6 nor M_4 as a sub-ortholattice <cit.>. Despite being strictly weaker than laws of Boolean algebra, properties in <ref> and <ref> allow us to prove a number of desirable facts: all laws of bounded lattices (absorption, reordering and de-duplicating conjuncts and disjuncts), and laws relating complement to lattice operators (including the laws needed to transform formulas to negation-normal form). Laws of Boolean algebra that do not necessarily hold include distributivity, modularity, and properties such as (¬ x y) = 1 implying x ≤ y. Similarly to how Boolean Algebra is the algebraic structure corresponding to classical logic, we find it natural that ortholattices form a class of structures that corresponds to a logic, orthologic, for which we study proof-theoretic and algorithmic properties in the following sections. We denote classical logic by and orthologic by . [Terms] 𝒯_ denotes the term algebra over the signature of ortholattices over a fixed countably infinite set of variables, that is, the set of all terms which can be built from variables and (, , 0, 1, ,). Terms are constructed inductively as trees. Leaves are labeled with 0, 1, or variables. Nodes are labeled with logical symbols. Since and are commutative, the children of a node form a set (non-ordered). 𝒯_ = 𝒯_, and is also the set of formulas for both classical logic and orthologic. We typically represent elements of 𝒯_ using lowercase Greek letters. We assume a countably infinite set of propositional variables, usually noted x, y, z, possibly with indices. Note that the laws of both and imply that 0 can always be represented by x x and 1 as x x. To simplify proofs, we thus may omit the cases corresponding to 0 and 1. The word problem for an algebra consists in, given two terms in the language of the algebra, deciding if they are always equal by the laws of the algebra or not. For ortholattices, we can relax the definition to allow inequality queries, as they can be expressed as equivalent equalities. A (finite) presentation for an algebra is a (finite) set of equalities (which we relax to inequalities in ortholattices) {ϕ_1 ≤ψ_1, ..., ϕ_n≤ψ_n }. The uniform word problem for presented ortholattices is the task consisting in, given a presentation A and two terms ϕ and ψ, deciding if ϕ≤ψ follows from the laws of ortholattices and the axioms in A. 
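As an illustration of the definitions in this section (and of the O_6 counterexample mentioned earlier), the following sketch encodes O_6 explicitly and checks, by brute force, that the ortholattice laws it tests (involution, complementation, De Morgan) hold, that distributivity fails, that x ∧ (¬x ∨ u) ≤ u fails with the witness x ↦ b, u ↦ a, and that the axiom-style reading "x ∧ (¬x ∨ u) = 1 implies u = 1" does hold in O_6. The encoding and element names are ours, chosen to match the usual picture of O_6 (two chains 0 < a < b < 1 and 0 < ¬b < ¬a < 1); this is an illustration, not code from the paper.

```python
# A hand-rolled encoding of the ortholattice O_6 (two incomparable chains between 0 and 1),
# used only for brute-force spot checks; illustration only.
O6 = ["0", "a", "b", "nb", "na", "1"]                       # nb = ~b, na = ~a
LEQ = {("0", x) for x in O6} | {(x, "1") for x in O6} | \
      {(x, x) for x in O6} | {("a", "b"), ("nb", "na")}
NEG = {"0": "1", "1": "0", "a": "na", "na": "a", "b": "nb", "nb": "b"}

def leq(x, y): return (x, y) in LEQ
def join(x, y):
    ubs = [z for z in O6 if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, t) for t in ubs))
def meet(x, y):
    lbs = [z for z in O6 if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(t, z) for t in lbs))

# O_6 satisfies involution, complementation, and De Morgan ...
print(all(NEG[NEG[x]] == x and meet(x, NEG[x]) == "0" and join(x, NEG[x]) == "1"
          and NEG[meet(x, y)] == join(NEG[x], NEG[y]) for x in O6 for y in O6))
# ... but it is not distributive:
print(meet("b", join("nb", "a")), "vs", join(meet("b", "nb"), meet("b", "a")))   # b vs a

# The inequality x /\ (~x \/ u) <= u fails with the witness x = b, u = a:
x, u = "b", "a"
print(leq(meet(x, join(NEG[x], u)), u))                                          # False
# ... while the axiom-style reading holds: x /\ (~x \/ u) = 1 forces u = 1 in O_6.
print(all(u == "1" for x in O6 for u in O6 if meet(x, join(NEG[x], u)) == "1"))  # True
```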
In the terminology of classical first-order logic, the laws of ortholattice in <ref> are a finite set of universally quantified formulas, 𝒯, and they define a first-order theory. The presentation (axioms) A is a set of quantifier-free formulas, whereas ϕ≤ψ is also a quantifier-free formula, with variables possibly in common with those of A. We can then view the uniform word problem as a special case of the question of semantic consequence in first-order logic: 𝒯∪ A ϕ≤ψ. § COMPLETE PROOF SYSTEM AND CUT ELIMINATION We formulate our proof system for orthologic as a sequent calculus. We represent sequents by decorating the formulas with superscript ^L or ^R, depending on whether they appear on the left or right side. For example, ϕ^L, ϕ^R stands for ϕ⊢ψ in more conventional notation. If ϕ is a formula, we call ϕ^L and ϕ^R annotated formulas. A sequent is a set of at most two annotated formulas. We use Γ and Δ to represent sets that are either empty or contain exactly one annotated formula (|Γ| ≤ 1, |Δ| ≤ 1). <ref> shows our sequent calculus for orthologic, parametrised by a set of axioms A. In the present article, orthologic, or denotes this specific proof system. Without the support for arbitrary axioms, an equivalent system, with a different presentation, was introduced in <cit.>. Note that the axioms we consider in this section are not universally quantified: they refer to arbitrary but fixed propositions. One can think of this proof system as Gentzen's sequent calculus for classical logic <cit.> restricted to ensure the following syntactic invariant: At any given place in a proof, a sequent never has more than two formulas on both sides combined. This restriction on the proof system bears resemblance to the syntactic restriction of intuitionistic sequent calculus, where a sequent can never have more than one formula on the right side, a restriction lifted in the classical logic sequent calculus system. Compared to intuitionistic logic, orthologic allows us to prove ⊢ϕ, ¬ϕ, represented as ϕ^R, (ϕ)^R, using the following steps. Hyp ϕ^L, ϕ^R RightNot ϕ^R, (ϕ)^R On the other hand, orthologic restricts the number of assumptions on the left side of the sequent. This strong restriction will be rewarded by the existence of a polynomial-time proof search procedure. We say that a deduction rule is admissible if any sequent that can be proven with the rule can be proven without. §.§ Ortholattice Semantics for Orthologic We interpret a sequent ϕ^L, ψ^R as ϕ≤ψ in an ortholattice. More generally, we have the following definition. The interpretation of a sequent is given by the following table, where ∅ denotes the empty sequent: [ ; ϕ^L, ψ^R ϕ≤ψ; ϕ^L, ψ^L ϕ≤ψ; ϕ^R, ψ^R ϕ≤ψ; ϕ^L ϕ≤ 0; ϕ^R 1 ≤ϕ; ∅ 1 ≤ 0 ] The intended reading of the table above is a mapping of sequents (which are sets) to ortholattice atomic formulas up to logical equivalence. The set {ϕ^L, ψ^L} can be mapped to either ϕ≤ψ or to ψ≤ϕ, but these are equivalent in an ortholattice (analogously for mapping {ϕ^R, ψ^R}). The interpretation of a deduction rule in <ref> with k premises P_1,…, P_k and a conclusion C, is the universally quantified first-order logic formula P_1 …P_n →C. Given an axiom set A we talk about ortholattice with presentation A by taking the interpretation of all axioms in A. Our proof system can prove every axiom of ortholattices (<ref>) and, conversely. Let A be an arbitrary (possibly infinite) set of axioms. 
A sequent has a derivation from A using the rules of orthologic (<ref>) iff its interpretation is in the first-order theory of ortholattices (<ref>) with presentation A. Sketch. For every <ref> law of the form P_1 …P_n →C, a matching deduction rule P_1 ... P_n C is easily seen to be admissible. Conversely, for every deduction rule of <ref>, the corresponding law is a consequence (in first order logic) of the axioms of <ref>. For any axiom set A, this makes our system (with the Cut rule) sound and complete for the class of all ortholattices satisfying axioms in A. Note that this interpretation is compatible with the interpretation of sequents in classical logic. We can use the soundness and completeness to obtain simple model-theoretic proofs for orthologic. We can show, for example, that substitution of equivalent formulas is admissible. Let Γ and Δ denote sets with at most one labelled formula each. Let Γ[χ:=ϕ] denote the substitution inside Γ of χ (a placeholder formula symbol) by ψ. The following rule for substitution of equivalent formulas is admissible in orthologic: Γ[χ:=ϕ], Δ[χ:=ϕ] ϕ^L, ψ^R ψ^L, ϕ^R Γ[χ:=ψ], Δ[χ:=ψ] Said otherwise, if both ϕ^L, ψ^R and ψ^L, ϕ^R can be shown then arbitrary occurrences of ϕ in a proven sequent can be replaced by ψ. The argument is semantic. Fix any ortholattice 𝒪 satisfying the axioms. Since both ϕ^L, ψ^R and ψ^L, ϕ^R are provable, it follows that ϕ = ψ in 𝒪. Hence, Γ[χ:=ϕ] = Γ[χ:=ψ] and Δ[χ:=ϕ] = Δ[χ:=ψ]. By completeness, the sequent Γ[χ:=ψ], Δ[χ:=ψ] is provable. §.§ Partial Cut Elimination As a sequent calculus, our system has structural rules, introduction rules for each logical symbol and a Cut rule, but no elimination rule. Consequently, by inspecting all rules, we conclude that Cut is the only rule whose premises contain formulas that are not subformulas of the concluding sequent. In an instance of a Cut rule in <ref>, we call the formula ψ the cut formula. In an instance of a left or Right rule, the newly constructed formula is called the principal formula. In the Weaken rule, if Δ contains a formula, it is also called the principal formula. <cit.> showed that orthologic, without arbitrary non-logical axioms, admits cut elimination. The crucial challenge is that, in contrast to classical or intuitionistic calculus, we cannot simply add additional assumptions to the left-hand side of sequents in orthologic derivations. The reason is the restriction on the number of formulas in sequents. The folowing example illustrates this phenomenon. We saw in <ref> that (x ( x u)) ≤ u is not always valid. In particular, the sequent (x ( x u))^L, u^R is not provable in orthologic without axiom. However, with axiom (x ( x u))^R, the sequent u^R is provable as follows. First, ( x u)^R is provable: Ax (x ( x u))^R ( x u)^L, ( x u)^R L.And (x ( x u))^L, ( x u)^R Cut ( x u)^R Then, ( x u)^R Ax (x ( x u))^R x^L, x^R L.And (x ( x u))^L, x^R Cut x^R L.Not ( x)^L Weaken ( x)^L, u^R u^L, u^R L.Or ( x u)^L, u^R Cut u^R In a classical sequent calculus system, the above derivation using the axiom x (¬ x u) could be transformed into a new derivation where each sequent has an additional assumption x (¬ x u) and where the use of axiom rule is replaced with the use of the Hyp rule. This transformation does not apply to : it would create sequents with more than two formulas, which, by the definition in <ref> cannot appear in proofs. For this reason, the ability to add non-logical axioms is crucial in orthologic. 
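To keep the discussion concrete, here is a minimal sketch (our own encoding, not the paper's implementation) of sequents as sets of at most two annotated formulas together with their interpretation as ortholattice inequalities; formulas are kept as plain strings purely for illustration, and we assume the usual reading in which two same-side formulas are related through a complement, i.e. φ^L, ψ^L is read as φ ≤ ¬ψ and φ^R, ψ^R as ¬φ ≤ ψ.

# Illustrative sketch: sequents as sets of at most two annotated formulas,
# interpreted as ortholattice inequalities (formulas are plain strings here).
L, R = "L", "R"

def sequent(*annotated):
    s = frozenset(annotated)
    assert len(s) <= 2, "orthologic sequents carry at most two formulas"
    return s

def neg(phi):
    return f"¬({phi})"

def interpret(seq):
    """Return (lhs, rhs), meaning lhs <= rhs holds in every ortholattice model."""
    lefts = sorted(phi for (phi, side) in seq if side == L)
    rights = sorted(phi for (phi, side) in seq if side == R)
    if len(lefts) == 1 and len(rights) == 1:
        return (lefts[0], rights[0])            # phi^L, psi^R  ->  phi <= psi
    if len(lefts) == 2:
        return (lefts[0], neg(lefts[1]))        # phi^L, psi^L  ->  phi <= ¬psi (either order is equivalent)
    if len(rights) == 2:
        return (neg(rights[0]), rights[1])      # phi^R, psi^R  ->  ¬phi <= psi
    if len(lefts) == 1:
        return (lefts[0], "0")                  # phi^L  ->  phi <= 0
    if len(rights) == 1:
        return ("1", rights[0])                 # phi^R  ->  1 <= phi
    return ("1", "0")                           # empty sequent  ->  1 <= 0

axiom = sequent(("x ∧ (¬x ∨ u)", R))   # the running example's axiom
goal = sequent(("u", R))
print(interpret(axiom), interpret(goal))   # ('1', 'x ∧ (¬x ∨ u)') ('1', 'u')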
We aim to extend the cut elimination property to proofs containing arbitrary axioms. This will allow us to devise an efficient decision procedure for orthologic with axioms, and, by extension, the word problem for finitely presented ortholattices. Moreover, the proof we present is constructive, in the sense that it shows an algorithmic way to eliminate instances of the Cut rule from a proof. Furthermore, we need not worry about the size of the transformed proof, because our Cut elimination propertly will enable us to derive a subformula property and a bound on the size of the proof of any given formula. However, the system does not allow for complete cut elimination in the presence of axioms, as the following short example shows. Let x_1,x_2,y be distrinct variables and let the sequent (x_1 x_2)^L, y^R be the only axiom. The sequent x_1^L, y^R is then provable: x_1^L, x_1^R x_1^L, (x_1 x_2)^R (x_1 x_2)^L, y^R x_1^L, y^R but it cannot be proven without using the Cut rule. To see why, note that Hyp, LeftAnd, RightAnd, LeftOr, RightOr, LeftNot, RightNot do not yield sequents whose syntactic form can be x_1^L, y^R. Furthermore, Ax does not produce the desired sequent as it is not the axiom. Finally, Weaken does not help because its premise would not be provable: neither x_1^L nor y^R are individually provable from the axiom, as can be seen by a semantic argument: it could be, for example, that both x_1 and y have value 0 or both have value 1 in an ortholattice that satisfies the axiom. Thus, cut rule is in general necessary when reasoning from axioms, and we need to formulate a suitable generalization of the concept of cut elimination. For this purpose, we define the rank of an instance of the cut rule. An instance of the Cut rule has rank 1 if either of its premises is an axiom. It has rank 2 if either of its premises is the conclusion of a rank 1 Cut rule. The following theorem is our main result. It implies that the Cut rule can be eliminated or restricted to only cut with respect to axioms. Part (1) has immediate consequences for the subformula property. Part (2) gives further insight into normalized proofs, further restricts our proof procedure in the next section, and it helps with the inductive argument in the proof of the theorem. If a sequent is provable in the system of <ref> with axioms Ax(a_i^∘,b_i^) a_i^∘, b_i^ for all (a_i^∘,b_i^) ∈ A (a_i and b_i are formulas, ^∘ and ^ are side annotation), then there is a proof of that sequent from the same axioms such that: * All instances of the Cut rule use only formulas among a_1, ... a_n, b_1,..., b_n as cut formulas. * All instances of the Cut rule are rank 1 or 2. If a proof does not satisfy the properties (1) and (2) of the theorem statement, then there is a Cut rule for which the condition in (1) or (2) does not hold. Consider a derivation using such a Cut rule as the last step; when Cut is not bottommost, the properties follow trivially by induction. Let the proof be of the form: Γ, ψ^R ψ^L, Δ Cut Γ, Δ where and are the proof trees whose conclusions are respectively the left and right premises and ψ is the Cut formula. We show that there exists a proof of Γ, Δ that satisfies the two properties of the theorem. We proceed by induction on the length of the proof . Hence, we can assume by induction that and satisfy properties 1 and 2. By induction hypothesis, it suffices to show how to transform the proof of Γ, Δ into one where Cut is used only in subproofs strictly smaller than the proof we started with. 
We do case analysis on and , showing for each case how to transform the proof in this way. ↪ denotes this transformation. Case 1 Suppose ends (and hence starts) with a Hypothesis rule. Then, Γ = ψ^L and ψ^L, Δ can be reached using only . The case where is a Hypothesis rule is symmetric. Case 2 ( ends with Weaken) Case 2.a Suppose ends with a Weaken rule and ψ^R is not the principal formula (see <ref>). ' ψ^R Weaken Γ, ψ^R ψ^L, Δ Cut Γ, Δ ↪ ' ψ^R ψ^L, Δ Cut Δ Weaken Γ, Δ In the transformed proof, ' is part of , so Cut applies to a smaller subproof and can be transformed to satisfy the properties (1) and (2) by inductive hypothesis. Case 2.b Suppose ends with a Weaken rule and ψ^R is the principal formula. ' Γ Weaken Γ, ψ^R ψ^L, Δ Cut Γ, Δ ↪ ' Γ Weaken Γ, Δ Case 3 ( ends with a Left rule where ψ is not principal) Case 3.a Suppose ends with a LeftAnd rule where Γ = (αβ)^L ' α^L, ψ^R LeftAnd (αβ)^L, ψ^R ψ^L, Δ Cut (αβ)^L, Δ ↪ ' α^L, ψ^R ψ^L, Δ Cut α^L, Δ LeftAnd (αβ)^L, Δ Case 3.b Suppose ends with a LeftOr rule where ϕ = αβ ' α^L, ψ^R ” β^L, ψ^R LeftOr (αβ)^L, ψ^R ψ^L, Δ Cut (αβ)^L, Δ ↪ ' α^L, ψ^R ψ^L, Δ Cut α ^L, Δ ” β^L, ψ^R ψ^L, Δ Cut β^L, Δ LeftOr (αβ)^L, Δ Case 3.c Suppose ends with a LeftNot rule, i.e. Γ = α ' α^R, ψ^R LeftNot (α)^L, ψ^R ψ^L, Δ Cut α^L, Δ ↪ ' α^R, ψ^R ψ^L, Δ Cut α^R, Δ LeftNotin (α)^L, Δ The cases where ends with a Right rule are symmetric. Case 4 If ends with Right rule where ψ^R is not the principal formula, the transformation is symmetric to the Left rule case. Similarly if ends with a Left rule where ψ^L is not the principal formula. Case 5 ( ends with Right rule and with a Left rule, ψ is principal in both) Case 5.a Suppose ends with a RightOr rule, i.e. ψ = (αβ). In this case, has to end with a LeftOr rule, as it is the only Left rule that can produce (αβ)^L ' Γ, α^R RightOr Γ, (αβ)^R ' α^L, Δ ” β^L, Δ LeftOr (αβ)^L, Δ Cut Γ, Δ ↪ ' Γ, α^R ' α^L, Δ Cut Γ, Δ Case 5.b If ends with a RightAnd rule and with a LeftAnd, the transformation is symmetric to case 5.a Case 5.c If ends with a RightNot rule and a LeftNot rule, and ϕ = α is principal in both: ' Γ, α^L RightNot Γ, (α)^R ' α^R, Δ RightNot (α)^L, Δ Cut Γ, Δ ↪ ' Γ, α^L ' α^R, Δ Cut Γ, Δ Case 6 Suppose either or end with an Axiom rule. Then, properties 1 and 2 are immediate. Case 7 Suppose ends with a Cut rule, which by induction we can assume is rank 1 or 2. Case 7.a If is rank 1, the following transformation works, since if ” is an axiom both properties immediately hold for both Cuts, and if ' is an axiom, both properties hold for the last cut and the other cut has smaller size. ' Γ, ϕ^R ” ϕ^L, ψ^R Cut Γ, ψ^R ψ^L, Δ Cut Γ, Δ ↪ ' Γ, ϕ^R ” ϕ^L, ψ^R ψ^L, Δ Cut ϕ^L, Δ Cut Γ, Δ Case 7.b Suppose is rank 2 and it is ' that ends with a rank 1 Cut: _1' Γ, χ^R _2' χ^L, ϕ^R Cut^1 Γ, ϕ^R ” ϕ^L, ψ^R Cut^2 Γ, ψ^R ψ^L, Δ Cut^3 Γ, Δ Either _1' or _2' is an axiom. If _2' is, the same transformation as above works because the last cut again has the axiom ϕ as a cut formula. If _1' is an Axiom, we transform to the following: ↪ _1' Γ, χ^R _2' χ^L, ϕ^R ” ϕ^L, ψ^R Cut^1 χ^L, ψ^R ψ^L, Δ Cut^2 χ^R, Δ Cut^3 Γ, Δ Then, the new Cut^3 is of rank 1 and its cut formula is part of an axiom. Cut^2 is of strictly smaller size than the proof we started from, so by induction its conclusion can be obtained with a proof satisfying properties 1 and 2. Case 7.c Suppose now that ” is a rank 1 Cut. 
We transform the proof as follows: ' Γ, ϕ^R _1” ϕ^L, χ^R _2” χ^L, ψ^R Cut ϕ^L, ψ^R Cut Γ, ψ^R ψ^L, Δ Cut Γ, Δ ↪ ' Γ, ϕ^R _1” ϕ^L, χ^R Cut^1 Γ, χ^R _2” χ^L, ψ^R ψ^L, Δ Cut^2 χ^L, Δ Cut^3 Γ, Δ Either _1” or _2” is an axiom. In both cases, Cut^3 is of rank 2 and its cut formula is part of an axiom. The proofs ending with Cut^1 and Cut^2 are of strictly smaller size than the original proof, so by induction they can be made to satisfy the desired properties 1 and 2 (one of them is already rank 1). The above cases cover all possibilities for how the premises of the topmost cut in the proof are constructed, concluding the proof. Since sequent calculus for orthologic has no elimination rule, traversing a sequent S backwards in its proof obtained by <ref> we obtain the following subformula property. If a sequent S has a proof in the proof system of <ref> with axioms, then it has such a proof where each formula in each sequent ocurring in the proof is a subformula of S or a subformula of an axiom. Let S be a sequent that has a proof. By <ref>, consider a proof of S with properties (1) and (2). We view the proof as a directed tree with root S, whose nodes are the sequents occurring in the proof. We show that each formula in the node is a subformula of S or of an axiom, by induction on the distance of the tree node to the root S. The property trivially holds for the root S. Suppose now that the property holds for a node S' in the tree; we show that it also holds for its children. We consider all applicable rules in <ref>. Hyp rule has no children, so there is nothing to show. Consider a Cut rule and one of its children, Γ, ψ^R. Then Γ is a subformula because it is a part of S', whereas ψ is a part of the axiom by property (1). The case for the other child ψ, Δ of the Cut rule is analogous. The case of the Left and Right rules are easy: we observe that their premises are sequents whose formulas are subformulas of the conclusion S', so they are a subformula of S' or of axioms, by inductive hypothesis and the transitivity of the “subformula” relation. Finally, if S' is obtained using the axiom rule, then its formulas are subformulas of the axioms by definition. § CUBIC-TIME PROOF SEARCH In this section we show that the subformula property (Corollary <ref>) not only implies a quadratic bound on the size of proofs, but also allows us to define an O(n^3) proof search procedure. Such deterministic polynomial-time search is in contrast to co-NP completeness of validity in classical propositional logic <cit.> and to PSPACE completeness of validity in intuitionistic logic <cit.>. We assume that the set A of axioms is finite throughout this section. Our approach is to eliminate from the proof system in <ref> deduction steps which do not satisfy the conditions of <ref>. We assume that no axiom is a trivial one a ≤ a, as that one does not help prove anything. For a formula, sequent or set of formulas or sequents o, let o denote the number of subformulas in o. This is asymptotically equal to the number of symbols needed to represent o. |S| ≤ 2 in our proof system, because S may contain at most two formulas. Similarly for the set of axioms, A is the number of subformulas in all axioms of A whereas |A| is the number of axioms. There is a function that when given S computes a set S containing at most 4(S+A)^2 intermediate sequents such that if S is provable then it is provable with a proof whose all sequents appear in S. By Corollary <ref>, there is a proof where each formula is a subformula of S or of A. 
Let S be the set of all sequents built from these formulas. There are at most 2(S+A) of labelled subformulas. A sequent has at most two labelled subformulas, and the number of sequents is bounded by the number of ordered pairs of labelled formulas, which is 4(S+A)^2. Let A be a set of axioms, and let S be a sequent. Then there is a constant C=7 such that if S has a proof in the system of <ref> then there are at most C+4|A| valid instances of rules whose conclusion is S. We call the sequents above S in these steps the possible parents of S. Let S be a sequent of the form Γ, Δ. We state in parentheses the maximal number of valid instances for each of the rules in <ref>: * (1) S can be deduced by an application of the hypothesis rule or from the axiom rule (not both as we assume the axioms are non-trivial) * (2) Γ, Δ can follow using weakening, from either Γ or Δ * (2) Γ, Δ can follow using a Left or a Right rule on Γ in at most two ways, depending on the structure of Γ: * If Γ = (ϕ)^L then S can be deduced from ϕ^R, Δ with LeftNot * If Γ = (ϕψ)^L then S can be deduced in two ways using LeftAnd: from ϕ^R, Δ and from ψ^R, Δ * If Γ = (ϕψ)^L then S can be deduced in exactly one way using LeftOr: from both ϕ^R, Δ and ψ^R, Δ. * Right rules are symmetrical and either Left or Right rules apply to Γ, not both * (2) Γ, Δ can follow using a Left or Right rule on Δ in also at most two ways, entirely analogously to the previous case. * (4|A|) Γ, Δ can be deduced using the Cut rule in 4|A| different ways: the cut formula ϕ can be any formula among the left or right sides of axioms (thanks to <ref> property (1)), for at most 2|A| different formulas, and for each the Cut instance can be either of Γ, ϕ^R ϕ^L, Δ Cut Γ, Δ or Δ, ϕ^R ϕ^L, Γ Cut Γ, Δ We thus obtain the desired bound 1 + 2 + 2 + 2 + 4|A| = 7 + 4|A|. There is a proof search procedure for running in time 𝒪((1+|A|)n^2), where |A| is the number of given axioms and n the total size of the problem. In <ref> we present pseudocode for backward search. Each line from 8 to 16 corresponds to trying a specific deduction rule. For an input sequent S, the proof strategy consists in recursively computing the possible parents of S. By <ref>, proof of a sequent S need only involve at most 4(S+A)^2 intermediate sequents, which is 𝒪(n^2). Moreover, note that to compute the possible parents of a sequent, we need not observe the formulas entirely but only their roots, so computing one possible parent is constant time. Moreover, by <ref>, a sequent can only have at most C+4|A| possible parents, so reducing a sequent to all of its possible parents has complexity 𝒪(1+|A|). The final complexity of the proof search procedure is bounded by generating parents for all possible sequents we can encounter, which is 𝒪((C+|A|)n^2) = 𝒪((1+|A|)n^2). §.§ Merging Axioms for Quadratic Complexity In the particular case where A=∅, <ref> is quadratic, which is also the best known result for the word problem and normalization problem in both ortholattices and lattices <cit.>. In general, it can be beneficial to keep the number of axioms as small as possible. For this purpose, we can combine axioms with the same left-hand side into one. Given two axioms representing a ≤ b_1 and a ≤ b_2 we can merge them into an equivalent one a ≤ b_1 b_2. 
Indeed, given an axiom sequent {a^L, (b_1 b_2)^R} we can derive {a^L, b_1^R} as follows: Ax a^L, (b_1 b_2)^R b_1^L, b_1^R LeftAnd (b_1 b_2)^L, b_1^R Cut a^L, b_1^R Dually, we can merge axioms with the same right-hand side, a_1 ≤ b and a_2 ≤ b into a_1 a_2 ≤ b. Finally, a ≤¬ b can be rewritten into b ≤¬ a and vice versa. We can repeat this process until all left-hand sides and all right-hand sides of axioms are distinct, and no left side is a complement of a right side (we can even use normal forms for ortholattices to make such checks more general <cit.>). Such axiom pre-processing transformations do not change the set of provable formulas. They can be done in time 𝒪(n^2) and they reduce |A| while not increasing n. Using such transformations can thus improve the cubic bound for certain kinds of axiom sets. As a very special case, if all axioms have the form 1 ≤ b_i and a_i ≤ 0 (corresponding to singleton sequents), we can combine them into a single axiom, obtaining 𝒪(n^2) complexity. There is an 𝒪(n^2) algorithm for checking provability from axiom sets A in which all axioms are singleton sequents. § PROOF STRENGTH OF ORTHOLOGIC WITH AXIOMS The previous section established a cubic time algorithm (<ref>) for deriving all consequences of axioms that hold in orthologic. This generalization is sound for classical logic while still being efficient. A key question then is: how precise is it as an approximation of classical logic? To help answer this question, we present several classes of classical problems that our algorithm solves exactly: it is not only sound for them (as it is for all problems), but also complete: it always finds a proof if, e.g., a SAT solver would find it. Furthermore, we partly characterize our proof system in terms of restricted forms of resolution for propositional logic. We are interested in traditional classes of deduction problems that are solvable by proofs. Formally, we define the deduction problem in orthologic, and, respectively, classical logic. An instance of the deduction problem is characterized by a pair (A, S) where A is a set of axioms and S the goal, all of which are sequents with the interpretation as inequality given by <ref>. The deduction problem in (resp. ) consists in deciding if the goal S can be derived from axioms A in orthologic (resp. classical logic). If this is the case, then the instance is called valid. An instance of the deduction problem is -solvable if and only if it has the same validity in and . A class of instances of the deduction problem is -solvable if and only if all its members are -solvable. As is sound relative to , the following are equivalent formulations of -solvability: * The instance has a proof in if and only if it has a proof in . * The goal of the instance is true in all ortholattice interpretations satisfying the axioms if and only if it is true in all { 0,1} interpretations satisfying the axioms. In particular, if the goal of the instance is the empty sequent (hence, we talk about the consistency of axioms), -solvability of (A,∅) is equivalent to each of the following statements: * The axioms of the instance are either unsatisfiable in or satisfiable in . * The axioms of the instance either have a non-trivial Boolean model, or admit only the trivial one-element structure as a model among all ortholattices. In particular, <ref> gives a polynomial-time decision procedure with respect to classical logic for any class of deduction problems instances that are -solvable. 
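Before turning to these classes, the following Python sketch may help make the preceding decision procedure concrete. It is our own simplified rendering, not the paper's algorithm: rather than memoized backward search, it saturates the finite set of candidate sequents allowed by the subformula property, with Cut instances restricted to axiom formulas as licensed by <ref>; the representation and names are assumptions made only for illustration.

from itertools import combinations

# Hedged, illustrative sketch (not the paper's algorithm): decide derivability of
# a goal sequent from axiom sequents by saturating the finite set of sequents
# allowed by the subformula property; Cut is restricted to axiom formulas.
# Formulas: ("var", x) | ("not", f) | ("and", f, g) | ("or", f, g); sides: "L"/"R".

def subf(f):
    yield f
    if f[0] == "not":
        yield from subf(f[1])
    elif f[0] in ("and", "or"):
        yield from subf(f[1])
        yield from subf(f[2])

def flip(side):
    return "R" if side == "L" else "L"

def one_step(s, proved, cut_formulas):
    """Does sequent s follow from already-proved sequents by a single rule?"""
    fs = list(s)
    if len(fs) == 2 and fs[0][0] == fs[1][0] and fs[0][1] != fs[1][1]:
        return True                                        # Hyp
    if any(frozenset(t) in proved
           for k in range(len(fs)) for t in combinations(fs, k)):
        return True                                        # Weaken
    for (f, side) in fs:                                   # Left/Right rules, read backwards
        rest = s - {(f, side)}
        if f[0] == "not" and rest | {(f[1], flip(side))} in proved:
            return True
        if (f[0], side) in (("and", "L"), ("or", "R")):    # one premise suffices
            if any(rest | {(g, side)} in proved for g in (f[1], f[2])):
                return True
        if (f[0], side) in (("or", "L"), ("and", "R")):    # both premises needed
            if all(rest | {(g, side)} in proved for g in (f[1], f[2])):
                return True
    for k in range(len(fs) + 1):                           # Cut, only on axiom formulas
        for t in combinations(fs, k):
            gamma = frozenset(t)
            delta = s - gamma
            for c in cut_formulas:
                if gamma | {(c, "R")} in proved and delta | {(c, "L")} in proved:
                    return True
    return False

def proves(axioms, goal):
    formulas = {g for seq in list(axioms) + [goal] for (f, _) in seq for g in subf(f)}
    annotated = [(f, side) for f in formulas for side in ("L", "R")]
    universe = ({frozenset()}
                | {frozenset([a]) for a in annotated}
                | {frozenset(p) for p in combinations(annotated, 2)})
    cut_formulas = {f for seq in axioms for (f, _) in seq}
    proved = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in universe - proved:
            if one_step(s, proved, cut_formulas):
                proved.add(s)
                changed = True
    return goal in proved

# Running example: from the axiom 1 <= x ∧ (¬x ∨ u), the goal u is derivable.
x, u = ("var", "x"), ("var", "u")
axiom = frozenset([(("and", x, ("or", ("not", x), u)), "R")])
print(proves({axiom}, frozenset([(u, "R")])))   # True
print(proves(set(), frozenset([(u, "R")])))     # False without the axiom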
In the sequel, we look at the satisfiability of propositional logic formulas in conjunctive normal form (CNF), which are conjunctions of clauses, as their analysis plays an important role in proof theory of and the practice of SAT solving. Among the simplest and most studied refutationally complete systems for is resolution on clauses, shown in <ref>. §.§ Completeness for 2SAT We start with the simplest example of a CNF, the 2CNF class. A 2CNF formula is a finite set of clauses C_1,...,C_m, where each clause is a disjunction of two literals or a single literal. For example, (x y), ( x z), ( z) is a 2CNF formula. 2SAT if the problem of deciding if a 2CNF formula is satisfiable, i.e. if it has a model in the two-element Boolean algebra. Conversely, the instance is unsatisfiable if and only if the conjunction of the clauses implies falsity. We next show how to encode a 2SAT instance into an deduction problem. The idea is to view a 2SAT instance as a deduction problem (A, ∅) where each axiom in A is a sequent containing at most two labelled variables as formulas. We create an axiom sequent for each clause, where a negative literal ¬ p becomes labelled formula p^L and a positive literal p becomes p^R. For example, { x, y} becomes x^L, y^R. Similarly, {x, y} becomes x^R, y^R, whereas { x } becomes x^R. This encoding is equivalent (in ) to the 2SAT instance, with the interpretation of sequents given in <ref>. Consider the Resolution prof system shown in <ref>. For 2SAT instances, the outcome of resolution can be simulated by orthologic using the Cut rule, which allows us to prove the following. 2SAT is -solvable. Consider a 2CNF and its representation as a set of sequents. The instance is unsatisfiable in if and only if there exist a derivation of the empty clause in Resolution. We proceed by induction on the Resolution derivation to show that if a clause is derived, then there is an -proof of the corresponding sequent. Consider a Resolution step between two clauses of (at most) two elements. The sequent corresponding to its conclusion can be deduced from the sequent corresponding to its premises by a single application of the Cut rule in orthologic. {γ, y } { y, δ} Resolution γ, δ ↪ {Γ, y^R } { y^L, Δ} Cut Γ, Δ Where γ and δ are one arbitrary literals (or no literal) and Γ and Δ represent the corresponding annotated formula (or absence thereof). Weaken and Hypothesis steps are similarly simulated by the eponymous steps in . Conversely, if the empty sequent is derivable in orthologic, then no non-trivial ortholattice satisfy the assumptions, and hence no Boolean algebra. §.§ Orthologic Emulates Unit Resolution For an arbitrary clause N ∪ P, let it be encoded as the sequent (⋀ N)^L, (⋁ P)^R, where N (resp. P) is the set of negative (resp. positive) variables in the clause. A Unit Resolution step is a Resolution step (<ref>) where C=∅ or C'=∅. Interpreted over sequents, the application of a Unit Resolution step on two clauses C_1={ x_i } and C_2=({ x_1,…, x_n}∪ P) corresponds to the deduction rule UnitResolutionR. Dually, for C_1 = {¬ x_i } we obtain UnitResolutionL: (x_i)^R (x_1… x_i-1 x_i x_i+1… x_n)^L, (⋁ P)^R UnitResolutionR (x_1 … x_i-1 x_i+1… x_n)^L, (⋁ P)^R (x_i)^L (⋀ N)^L, (x_1… x_i-1 x_i x_i+1… x_n)^R UnitResolutionL (⋀ N)^L, (x_1… x_i-1 x_i+1… x_n)^R UnitResolutionL and UnitResolutionR are admissible rules (<ref>) for proof system of <ref>. We aim to show that this step is admissible in proofs. 
Instead of giving a syntactic transformation, we can see that the steps are sound in with a short semantic argument. For UnitResolutionL, consider any ortholattice and assume that the premises of the rules are true. The meaning of x_i^R is 1 ≤ x^R, which implies x_i = 1. This implies (x_1 ... x_i ... x_n) = (x_1… 1 … x_n), so the value of the non-unit premise clause reduces to the truth of the conclusion of the rule. By completeness (<ref>), there exists a proof of the conclusion from the premises. Thus, UnitResolutionR is admissible. The argument for UnitResolutionL is dual. §.§ Completeness for Horn Clauses A Horn clause is a disjunction of literals such that at most one literal is positive. We encode a Horn clause into a sequent as (a_1 ... a_n)^L, b^R where a_i are the negated variables of the clause (if any), and b the positive literal of the clause (if it exists). A Horn instance is a conjunction of Horn clauses, and is unsatisfiable if and only if the empty clause can be deduced from it using Resolution. We encode a Horn instance into a deduction problem by adding an axiom for each Horn clause, and the empty sequent as the goal. Horn instances can be solved using Unit Resolution only <cit.>. By <ref>, is complete for Horn instances. Other classes solvable using only Unit Resolution include q-Horn, extended Horn and renamed Horn instances <cit.>. Horn Clause instances, q-Horn instances, extended Horn instances, and renamed Horn instances are -solvable classes. Note that, despite our use of semantic techniques to show completeness for resolution, the results of <ref> apply, provide polynomial-time guarantees for solving these instances. Renamed Horn instances are an interesting extension of Horn instances. A conjunction of clauses is renamed Horn if and only if there exists a set of variables V such that complementing variables of V in the instance yields a Horn instance. In particular, a clause in a renamed Horn instance can contain multiple positive and negative literals. Unit Resolution is stable under such renaming, meaning that a unit resolution derivation of the empty clause remains a valid unit resolution derivation of the empty clause after renaming. This is the reason that Unit Resolution is also complete for renamed Horn clauses. §.§ Renaming Deduction Problems Motivated by renamed Horn instances, we now consider renaming of general deduction problems. We show that renamings of -solvable instances are -solvable. For an arbitrary variable x, the complement of x is x and the complement of x is x. Two deduction problems I_1 and I_2 are renamings of each other if there exists a set of variables V such that complementing variables of V in the axioms and goal of I_1 yields I_2. Let I be a deduction instance such that I is valid in if and only if it is valid in . Then all renamed versions of I are valid in if and only if they are valid in . In , if I_1 and I_2 are renamed versions of each other by a set of variables V, I_1 and I_2 have the same validity. Indeed, suppose there is an ortholattice 𝒪 and assignement s_1:V→𝒪 that is a counter model of I (meaning it satisfies the axioms but not the goal, so I_1 is invalid). Then define s_2 such that s_2(x)=s_1(x) if x∉V and s_2(x)=s_1(x) if x ∈ V. It is then easy to check that since x = x in , s_2 is a counter model for I_2. Hence if I_1 is invalid then I_2 is invalid. Conversely, if I_2 admits a counter model, then I_1 admits a counter model as well. 
Similarly in , I_1 and I_2 have the same validity by the same argument, considering assignments V →{0, 1} as counter models. Hence, since I_1 has the same validity in and , so does I_2. We do case analysis on the validity of I in . Suppose that I is invalid in . Then, a renamed version of I' of I is also invalid in . Since is a generalization of CL, I' has to be invalid in as well. Suppose now that I is valid in . Then any renamed versions I' of I is too. Hence, we need to show that I' is valid in . Since by assumption I has the same validity in and , I is valid in . Consider an -proof of I noted 𝒫. For a given renamed instance I' of I by a set of variables V, consider the proof 𝒫' where all occurences of variables in V in 𝒫 are negated. It is easy to see that all instances of Hypothesis, Weaken, Cut and left and right rules in 𝒫' remain correct. However, the axioms used in 𝒫' are not quite axioms of I, as we introduced double negation in front of some literals. For example, consider the instance I which has axioms c^L and c^R. There, the following is a proof of the empty sequent: Ax ( c)^L Ax ( c)^R Cut ∅ The instance I', which is I renamed by { c }, has axioms c^L and c^R. However 𝒫' becomes: Ax ( c)^L Ax ( c)^R Cut ∅ The Cut step is correct, but axioms are not, because there c was complemented instead of negated. However, since orthologic admits double negation elimination, c is equivalent to c in all ortholattices, and hence by completeness the desired proof does exist (in fact, it simply consists in extending all doubly negated leaves of 𝒫' by one instance of RightNot and one of LeftNot). Note that we cannot simply complement the appearances of c (instead of negating them) in 𝒫, as that might not preserve correctness of proof steps. §.§ Tseitin's Transformation for Orthologic Axioms Tseitin's transformation for classical logic transforms a formula with arbitrary alternations of conjunctions and disjunctions into one in Conjunctive Normal Form (CNF) that is equisatisfiable. The transformation works by introducing a linear number of new variables, and runs in near linear time. This justifies the focus of SAT solvers on solving formulas in CNF. The essence of Tseitin's transformation in classical logic is to introduce variables, such as x, that serve as names for subformulas, such as F, with the interpretation of x bound to be equal the interpretation of F. Unfortunately, we cannot express such transformation as a single formula, because knowing that ¬ x F equals 1 inside a formula does not imply x ≤ F. On the other hand, once we make use of the power of axioms, the usual Tseitin's transformation again becomes possible. This shows the importance of adding axioms to orthologic reasoning. [-Tseitin transformation] Given a deduction problem instance with axioms A and goal S (where we assume without loss of generality that all formulas are in negation normal form), pick an arbitrary strict subexpression e of a formula in A or S of the form x y or x y, for some literals x and y. Pick a fresh variable c, introduce the axioms c^L, e^R and e^L, c^R and replace e by c in A and S. Repeat until all formulas have height at most 2. We say that a problem (A,S) is in Tseitin normal form if it is obtained from another problem using this transformation. The transformation does not alter the validity of a deduction problem instance in , since we merely defined aliases for subexpressions. Indeed, having both (c^L, e^R) and (e^L, c^R) as axioms is equivalent to forcing e=c in all models. 
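As a concrete illustration of this transformation (a sketch under our own tuple representation; the fresh names c0, c1, ... are assumed not to clash with existing variables), the following function names every strict subexpression bottom-up and records the two definitional axiom sequents per introduced name, leaving all formulas with height at most 2.

from itertools import count

# Hedged sketch of the Tseitin-style transformation defined above (names ours).
# Formulas are in negation normal form:
#   ("var", x) | ("not", ("var", x)) | ("and", f, g) | ("or", f, g)
fresh = (("var", f"c{i}") for i in count())

def is_literal(f):
    return f[0] == "var" or (f[0] == "not" and f[1][0] == "var")

def name_subterms(f, defs):
    """Replace f by a literal, introducing fresh names bottom-up and recording
    the two definitional axiom sequents c <= e and e <= c for each name c."""
    if is_literal(f):
        return f
    e = (f[0], name_subterms(f[1], defs), name_subterms(f[2], defs))
    c = next(fresh)
    defs.append(frozenset([(c, "L"), (e, "R")]))   # c <= e
    defs.append(frozenset([(e, "L"), (c, "R")]))   # e <= c
    return c

def flatten_formula(f, defs):
    """Return a formula of height at most 2 that equals f given defs."""
    if is_literal(f):
        return f
    return (f[0], name_subterms(f[1], defs), name_subterms(f[2], defs))

def tseitin(sequents):
    defs = []
    flat = [frozenset((flatten_formula(g, defs), side) for (g, side) in seq)
            for seq in sequents]
    return flat, defs

# Example: the axiom (x ∧ (¬x ∨ u))^R becomes (x ∧ c0)^R plus two definitions for c0.
x, u = ("var", "x"), ("var", "u")
axiom = frozenset([(("and", x, ("or", ("not", x), u)), "R")])
flat, defs = tseitin([axiom])
print(flat[0], len(defs))    # the flattened axiom plus 2 definitional sequents for c0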
§.§ Resolution Width for Orthologic Proofs in CNF Consider an proof for a deduction problem to which we applied -Tseitin's transformation. If ∘ and denote arbitrary {_^L, _^R} annotations, the resulting problem will only contain sequents of the form: {a^∘, (b c)^}, {a^∘, (b c)^}, {a^∘, b^}, {a^∘}, and ∅, for some literals a, b and c. Moreover, remember that by the subformula property (<ref>), if the problem admits a proof, then it has a proof that only uses formulas among a, a b and a b (for any literals a and b appearing in the problem). Hence, we can constrain every proof of S to involve only sequents of at most 4 literals. In classical logic, every sequent appearing in the proof would then be equivalent to a conjunction of disjunctions of literals (i.e. a conjunction of clauses). In the simplest case: (w x)^L, (y z)^R ( w x y z) For sequents involving a left disjunction or right conjunction, using greatest lower bound and lowest upper bound properties of and : (w x)^L, (y z)^R ( w y z) ( x y z) (w x)^L, ( y z)^R ( w x y) ( w x z) (w x)^L, (y z)^R ( w y) ( w z) ( x y) ( x z) And similarly with all combinations of ^L and ^R, where, for example, a conjunction with L polarity behaves much like a disjunction with R polarity. We say that such a set of clauses represents the corresponding sequent. Crucially, each of these clauses contains at most 4 literals. We now consider again the Resolution proof system of <ref>. Note that we work with plain resolution and not extended resolution that introduces fresh variables on the fly <cit.>. The width of a resolution proof is the number of literals in the largest clause appearing in the proof. The next theorem characterizes the width of those classical logic resolution proofs that suffice to establish all formulas provable by derivations (and possibly some more). An proof of a problem in -Tseitin normal form can be simulated by <ref> proofs of width 5. Consider an proof that satisfies properties of <ref>. We proceed by induction on the structure of such proof. The cases of non-cut rules are immediate. Namely, Hypothesis and Weakening in are directly simulated by the corresponding steps in Resolution. In LeftNot, the principal formula must be a literal, so that the clause representation of the conclusion is the same as the interpretation of the premise. In LeftOr, similarly, the representation of the conclusion is the conjunction of the representations of the premises. LeftAnd is simulated in a Resolution proof by Weakening. Right- steps are symmetrica to Left-steps. The only non-trivial case is the Cut rule. The cut formula can have different shape * The cut formula is a literal Γ, x^R x^L, Δ Cut Γ, Δ We then have technically 36 different cases to describe depending on whether Γ and Δ are conjunctions, disjunctions or literals and their polarity. We show the two extreme cases, and all other can be deduced by symmetry. If Γ and Δ are left conjunctions, right disjunctions or literals, the Cut rule is simulated with a single Resolution instance: (a b)^L, x^R x^L, (c d)^R Cut Γ, Δ ↪ { a, b, x } { x, c, d} Resolution { a, b, c, d } If Γ and Δ are left disjunctions or right conjunctions, 2 applications of Resolution are necessary to obtain each of the two clauses in the conclusion (4 if both): (a b)^L, x^R x^L, (c d)^R Cut Γ, Δ ↪ { a, x}, { b, x} { x, c }, { x, d} 4 × Resolution { a, c}, { a, d}, { b, c}, { b, d} Where each of the clause in the conclusion can be reached by using Resolution on two of the clauses in the premises. 
* Consider now the case where the Cut formula is a conjunction (the disjunction case is symmetrical). Γ, (x y)^R (x y)^L, Δ Cut Γ, Δ We again present the two extreme cases. If Γ and Δ are both left conjunctions or right disjunctions: (a b)^L, (x y)^R (x y)^L, (c d)^R Cut (a b)^L, (c d)^R ↪ { a, b, x }, { a, b, y } { c, d, x, y} Resolution on x { a, b, c, d, y} Resolution on y { a, b, c, d } The conclusion can be reached by applying Resolution twice successively, but here the intermediate clause { a, b, c, d, y} reaches width 5. If Γ and Δ are left disjunctions or right conjunctions: (a b)^L, (x y)^R (x y)^L, (c d)^R Cut (a b)^L, (c d)^R ↪ { a, x }, { a, y }, { b, x }, { b, y } { x, y, c}, { x, y, d} 4× Resolution on x { y, a, c }, { y, a, d }, { y, b, c }, { y, b, d } 4× Resolution on y { a, c }, { a, d }, { b, c }, { b, d } the Cut can be simulated by first resolving 4 times on x and then 4 times on y. Hence, Resolution of width 5 can simulate all proofs. § EFFECTIVELY PROPOSITIONAL ORTHOLOGIC So far we have studied propositional orthologic. In this section we introduce decidable classes of predicate orthologic. Our inspiration is the Bernays–Schönfinkel-Ramsey (BSR) class of classical first-order logic formulas <cit.>, which consists of formulas of first order logic that contain predicates and term variables but no function symbols, and whose prenex normal form is of the form ∃ x_1,...x_n. ∀ y_1,...,y_n. ϕ where ϕ is quantifier free. Syntactically, a formula in the BSR class can be represented as a quantifier-free formula with constants and variables symbols, where the variables are implicitly universally quantified. It is also called Effectively Propositional Logic <cit.>, because deciding the validity of such formula can be reduced to deciding the validity of a formula in propositional logic by a grounding process. This is possible because formulas in the BSR class have finite Herbrand universe <cit.>. BSR class (with its multi-sorted logic generalization) has found applications in verification <cit.>. We show that orthologic also admits a similar grounding process, allowing us to define the class of effectively propositional orthologic. The class is EXPTIME in the worst case, in contrast to co-NEXPTIME for BSR. Moreover, we show that it becomes polynomial if we restrict the maximal number of variables in axioms, which is in contrast to the corresponding restriction yielding an NP-hard class for classical logic <cit.>. In this section, variables denote variable symbols at the term level, and not propositional variables. We fix two disjoint countably infinite sets of symbols: constants C, and variables V. A predicate signature Σ specifies a finite set of predicate symbols {p_1,…, p_n} with their non-negative arities s_i, and a finite non-empty set of constants C = {c_1,…,c_n}. Define the set of atomic formulas over Σ as P_Σ = ⋃_i=1^n { p_i(x⃗) |x⃗∈ (V ∪C)^s_i} An EPR formula (over Σ) is a formula constructed from P_Σ using ,,¬, corresponding to P_Σ. An annotated EPR formula is a^L or a^R where a is an EPR formula. An EPR sequent is a set of at most two annotated EPR formulas. The degree of a formula or sequent is the number of distinct free variables in it. The degree of a finite set A of sequents, d(A), is the maximum of degrees of its sequents. An atomic formula, formula, or a sequent is ground if it contains no variables (only constants), that is, it has degree zero. 
An EPR deduction instance is a set (A,S) where A (the axioms) is a set of EPR sequents and S (the goal) is an EPR sequent. An instance of an atomic formula (formula, sequent) is a formula obtained by replacing all occurrences of some variables by other variables or constants. The expansion of an atomic formula s (formula, sequent), denoted s^*, is the set of all of its instances. s^* is countable, and infinite if s contains at least one variable. If A is a set of sequents then A^* = ⋃{ s^* | s ∈ A }. For an EPR deduction instance (A,S) its expansion is (A^*,S). is the problem, given a signature Σ and EPR deduction instance (A,S) over Σ, decide whether its expansion (A^*,S) is a valid deduction instance (in the sense of <ref>). We first show as an intermediate lemma that we only need to look at instances of A using variables appearing in S and constants in S and A. Suppose S has a proof 𝒫 involving axioms in A^*. Then it has a proof containing only variables that appear in S. If a variable z appears somewhere in 𝒫 but not in S, then it has to be eliminated by a Cut rule at some point. Γ, ϕ(z)^R ϕ(z)^L, Δ Cut Γ, Δ Let c ∈C be any constant symbol of Σ. Let [z:=c] be the proof with every instance of z replaced by c. For any given axiom a ∈ A^*, a[z:=c] is also an axiom of A^*, so that all axioms steps occurring in [z:=c] are correct. It is easy to see that all non-axioms steps in [z:=c] remain correct under the substitution. Because by assumption the Cut rules eliminates z from the formula, Γ does not contain z and hence the conclusion of [z:=c] is exactly Γ, ϕ(c)^R. The same can be done to , so that we obtain a proof where z does not appear: [z:=c] Γ, ϕ(c)^R [z:=c] ϕ(c)^L, Δ Cut Γ, Δ The problem (A,S) of size n and degree d(A) is solvable in PTIME(n^d(A)). We will reduce to propositional deduction problems, which <ref> can then solve. By completeness of (<ref>), whether S holds in all ortholattices where A holds is equivalent to the following question: Does there exist a finite subset A' of A^* such that S has an orthologic proof with axioms among A'? <ref> implies that for any sequent S, we only need to consider a finite number of axioms, mainly the axioms involving variables in S and constants in A and S (minimum one). Each axiom a has at most (|S|+||A||)^d(a) such instances, so the total number of axiom we need to consider is 𝒪(|A|·(|S|+||A||)^d(A)) = 𝒪(n^d(A)+1). Combining this result with <ref> gives PTIME(n^d(A)), or, more precisely, 𝒪(n^3(d(A)+1)). §.§ Instantiation as a Rule Instead of starting by grounding all axioms, we can delay instantiation until later in the proof, yielding shorter proofs in some cases. Formally, we add an instantiation step to the proof calculus of <ref> over P: Γ, Δ Inst Γ[x⃗:=t⃗], Δ[x⃗:=t⃗] Holding for arbitrary sets of term variable x⃗ and terms t. We note the resulting proof system ^I. For a sequent S and set of axioms A over quantifier-free predicate logic, S has has an proof from A^* if and only if S has an ^I proof from A. →: Given an proof with axioms in A^*, said axioms can be obtained from A in ^I by an application of the instantiation rule: Ax Γ^*, Δ^* ... S ↪ Ax Γ, Δ Inst Γ^*, Δ^* ... S Where (Γ, Δ) ∈ A and (Γ^*, Δ^*) ∈ A^*. ←: Note that if the Inst rule are only uses right after axioms, then we can reverse the transformation above. We show that given a proof in ^I using Inst, the instances of Inst can be swapped with other rules and be pushed to axioms. 
For example ' α^L, Δ ” β^L, Δ LeftOr (αβ)^L, Δ Inst (α^* β^*)^L, Δ^* ↪ ' α^L, Δ Inst (α^*)^L, Δ^* ” β^L, Δ Inst (β^*)^L, Δ^* LeftOr (α^* β^*)^L, Δ^* The cases for all other rules are similar. Then, any conclusion of an Inst rule is a member of ^* and can be replaced by an Axiom rule to obtain an proof from A^*. §.§ Proof Search with Unification While searching for a proof, we usually want to delay decision-making (such as which variable to instantiate) for as long as possible. In backward proof search, this means we want to delay it until the sequent is an axiom of A^*. In forward proof search, however, being able to use the Inst rule allows delaying instantiation as much as possible, as in resolution for first-order logic classical <cit.>, <cit.>. We thus adopt unification to decide when and how to instantiate a variable, whenever we use a rule with two premises. The corresponding directed rules are shown in <ref>. Without function symbols, the most general unifier of ϕ and ψ is the substitution θ of smallest support such that θ(ϕ) = θ(ψ) <cit.>. A sequent S over P has a proof in ^I if and only if there exists a sequent S' such that S is a particular instantiation of S' and S' has a proof where the Inst rule is only used in the specific cases of <ref> or to rename variables. (Sketch). The proof is once again by induction and case analysis on the proof 𝒫 of S, except we move the instantiation step toward the conclusion of the proof. Any instance of the Inst rule in 𝒫 can be swaped with the next rule, unless it is a Cut, LeftOr or RightAnd rule. Consider the proof rule that follows the instantiation: * Hypothesis is a leaf rule, so it can never follow a step. * Weaken is immediate, as long as the variables in Δ are properly renamed Γ Inst σ(Γ) Weaken σ(Γ), Δ ↪ Γ Weaken σ(Γ), π(Δ) Inst σ(Γ), Δ Where π is a renaming of variables in Δ to names that are fresh. In particular, it is invertible. * LeftNot and RightNot are immediate. * For RightOr and LeftAnd, the transformation is the same as Weaken. * In the Cut case, assume that the two premises (Γ, ϕ^R) and (ψ^L, Δ) have no shared variables, by renaming them if necessary: Γ, ϕ^R Inst σ_1(Γ), σ_1(ϕ)^R ψ^L, Δ Inst σ_2(ψ)^L, σ_2(Δ) Cut σ_1(Γ), σ_2(Δ) ↪ Γ, ϕ^R Inst θ(Γ), θ(ϕ)^R ψ^L, Δ Inst θ(ψ)^L, θ(Δ) Cut θ(Γ), θ(Δ) Inst σ_1(ψ)^L, σ_2(Δ) Note that since θ is the most general unifier for ϕ and ψ, and σ_1(ϕ) = σ_2(ψ), θ factors in both σ_1 and σ_2, so that the last Inst step is correct. * LeftOr and RightAnd are similar to the Cut step. §.§ Solving and Extending Datalog Programs with Orthologic Datalog is a logical and declarative programming language admitting formulas in a further restriction of the BSR class where ϕ is forced to be a Horn clause (over predicates). A Datalog program is then a conjunction of such formulas (or clauses) <cit.>. While the validity problem for the BSR class is coNEXPTIME complete <cit.>, solving a Datalog program is only EXPTIME-complete. This makes Datalog a suitable language for logic programming and database queries. Typically, a Datalog query asks if a certain fact (an atom without variable) is a consequence of the clauses in the program. This naturally corresponds to solving a deduction problem, with the axioms corresponding to the program and the goal to the query. Datalog program can be evaluated using orthologic. Datalog programs and queries form a subset of the BSR class. <ref> shows that such problems can be reduced via grounding to purely propositional . 
Moreover, as the resulting set of axioms contains only Horn clauses, <ref> implies that the Datalog program has the same semantic in and in . This means that for any Datalog program, the semantic agrees with the classical semantic. <ref> hence provides a decision procedure for Datalog with complexity PTIME(n^d(A)) (as <ref> shows). This matches known complexity classes of Datalog, which has exponential query complexity (corresponding here to axioms) and polynomial data complexity (corresponding to the goal and axioms with no variables, or facts) <cit.>. §.§ Axiomatizing Congruence and Equality Relations We next show that, when equality is axiomatized in effectively propositional orthologic, a substitution rule becomes admissible. Let X be a countably infinite set of term variables whose elements are noted x,y,z,.... Let be given a presentation with predicate symbols p_1,..., p_n each of arity s_i and ∼ a predicate symbol representing a congruence relation on X. Consider the set of axiom A_∼ containing the following sequents that axiomatize the equivalence property: (x∼ x)^R (x∼ y)^L, (y∼ x)^R (x∼ y y∼ z)^L, (y∼ x)^R and for each symbol p_i and each 1 ≤ j≤ s_i, the congruence property for p_i: (x∼ y p_i(z_1,...,z_j-1, x, z_j+1,...,z_s_i))^L, p_i(z_1,...,z_j-1, y, z_j+1,...,z_s_i)^R Again, this does not constitute a finite presentation of an ortholattice, as there are infinitely many possible instances of axioms. However, by <ref>, if a sequent S over X_∼ has a proof involving axioms in A_∼, then it has a proof with only variables that appear in S. Moreover, the degree of A_∼ is d(A_∼) = max(3, max_i(s_i)+1) axioms, so that the complexity of the proof search is exponential in the arity of the predicates in the language and polynomial in the size of the problem, for a fixed language. The following lemma shows that in any decision problem whose axioms contain A, we can add a substitution rule for equality. Fix a set of predicate symbols and constants. Let A be a set of axiom such that _∼⊂ A. The following rule for substitution of equal terms is admissible in with axioms in A: Γ[x:=s], Δ[x:=s] s∼ t Γ[x:=t], Δ[x:=t] Suppose x occurs only once in Γ, Δ (if it appears multiple time, we repeat the argument). Suppose without loss of generality that this unique occurrence is in Γ. Let a(x) ≡ p_i(u_1,...,x,...u_s_i) be the atomic formula containing this occurrence of x, i.e. Γ = Γ'[χ:=a(x)], for a propositional variable χ. Axioms in A_∼ allow the following proof, where we first derive a(s)^L, a(t)^R: s∼ t^R Weak. a(s)^L, s∼ t^R RightAnd Hyp. a(s)^L, a(s)^R R.And a(s)^L, (s∼ t a(s))^R Ax. (s∼ t a(s))^L, a(t)^R Cut a(s)^L, a(t)^R and then conclude, using an analogous derivation of a(t)^L, a(s)^R Γ[x:=s], Δ Γ'[χ:= a(s)], Δ a(s)^L, a(t)^R s∼ t^R Ax. s∼ t^L, t∼ s^R Cut t∼ s^R …(analogous)… a(t)^L, a(s)^R Subst Γ'[χ:= a(t)], Δ Γ[x := t], Δ The dashed lines denote syntactic equality. Note that we have shown the Subst rule for propositions to be admissible in orthologic in <ref>. Hence, Subst_∼ is admissible in orthologic with any axiomatization containing A_∼. TODO: Please summarize as a theorem: what complexity do we get for labelledDatalog with congruence relations? In fact, in the absence of function symbols, we can just say that this suffices to axiomatize equality? Can we also use it to justify proof rule that substitutes equivalent things with equivalent things? The models would suggest yes, but in the absence of completeness we don't know. 
If not here, the substitution rule should be shown to hold for propositional case in the previous section, where we are certain that it holds because the proof system is sound and complete. § FURTHER RELATED WORK <cit.> first solved the word problem for free ortholattices in 1976 using algebraic techniques extending the work of <cit.>, who first solved the word problem for free lattices. The observation that we can obtain a proof system for Orthologic by restricting Gentzen's sequent calculus to sequents with at most two formulas was already made by <cit.>. They also showed that the system without axioms admits Cut Elimination. <cit.> used the same system to describe a backward proof search procedure exponential both in time and proof size, and a polynomial (Ω(n^7)) forward procedure. We improved this result to a 𝒪(n^2) (time and size) backward proof search procedure. Other proof systems have been considered. <cit.> used a different set of inference rule to show that the word problem for finitely presented ortholattices is decidable in polynomial time. Their solution involves exhaustive forward deduction, and they give no precise exponent of the polynomial. <cit.> introduce two other sequent-based proof systems for orthologic using concepts from linear logic, and in particular focusing, to constraint proof search. Their procedure is forward-driven. While no complexity analysis is provided, their algorithm is clearly polynomial, and experimental benchmark shows improvement over the algorithm of <cit.>. <cit.> use ortholattices in the context of software verification as an approximation of Boolean algebra. They present an algorithm able to normalize any formula into an equivalent one (by ortholattices laws) of smallest size, with the goal of improving caching efficiency and reducing formula size for SMT solving. Their approach does not involve a proof system and does not support axioms in general. Researchers have explored extensions of Datalog with negations and disjunction, with various computable semantics. Most of these models do not correspond to classical models of predicate logic. Datalog is declarative, but most of its extensions rely on procedural semantics. In a line of work starting with <cit.>, negation has been introduced with a failure meaning, where if a cannot be verified, then a is taken to hold. <cit.> introduces a formal (but not decidable) semantic for negation as failure. <cit.> introduces the notion of stratified programs. This consists in specifying layers of clauses evaluated increasingly, so that if a is not shown in a level, a is assumed in the next, which differs from orthologic semantics. <cit.> introduce a notion of classical negation but the logic is not classical, as a → b and b → a are not equivalent in the proposed semantics. The use of Datalog in program analysis inspired researchers to define Datalog with lattice semantics <cit.>, which explicitly incorporates the concept of fixed point of particular lattices into the language semantics. Our approach is instead to view Datalog in the broader context of validity in all ortholattices, with orthologic as a convervative approximation of validity for classical logic that is always sound and, in several cases we identified, complete. § CONCLUSIONS We have studied algorithmic and proof-theoretic properties of orthologic, a sound generalization of classical logic based on ortholattices. 
We have shown a form of generalized cut elimination for propositional orthologic in the presence of axioms, implying a subformula property. We have used this result to design a cubic-time proof search procedure for orthologic with axioms (quadratic with bounded cardinality of axiom sets). Furthermore, we have shown that some classes of classical decision problems, including 2CNF, propositional Horn clause generalizations, and Datalog, always admit orthologic proofs. This provides sound and complete polynomial-time reasoning for a number of theorem proving tasks in classical logic. We anticipate applications of orthologic with axioms in predictable proof automation inside proof checkers, program verifiers, and expressive type systems.
http://arxiv.org/abs/2307.06266v1
20230712160815
Towards a privacy-preserving distributed cloud service for preprocessing very large medical images
[ "Yuandou Wang", "Neel Kanwal", "Kjersti Engan", "Chunming Rong", "Zhiming Zhao" ]
cs.CE
[ "cs.CE", "cs.DC" ]
Towards a privacy-preserving distributed cloud service for preprocessing very large medical images Yuandou Wang^1, Neel Kanwal^2, Kjersti Engan^2, Chunming Rong^2, Zhiming Zhao^1,3 ^1Multiscale Networked Systems, University of Amsterdam, the Netherlands ^2Department of Electrical Engineering and Computer Science, University of Stavanger, Norway ^3LifeWatch ERIC Virtual Lab and Innovation Center (VLIC), Amsterdam, the Netherlands Email: {y.wang8, z.zhao}@uva.nl; {neel.kanwal, kjersti.engan, chunming.rong}@uis.no Extended Abstract Digitized histopathology glass slides, known as Whole Slide Images (WSIs), are often several gigapixels large and contain sensitive metadata information, which makes distributed processing infeasible. Moreover, artifacts in WSIs may result in unreliable predictions when Deep Learning (DL) algorithms are applied directly. Therefore, preprocessing WSIs is beneficial, e.g., eliminating privacy-sensitive information, splitting a gigapixel medical image into tiles, and removing the diagnostically irrelevant areas. This work proposes a cloud service to parallelize the preprocessing pipeline for large medical images. The data and model parallelization will not only boost the end-to-end processing efficiency for histological tasks but also secure the reconstruction of the WSI by randomly distributing tiles across processing nodes. Furthermore, the initial steps of the pipeline will be integrated into the Jupyter-based Virtual Research Environment (VRE) to enable image owners to configure and automate the execution process based on resource allocation. Computational Pathology, Cloud Computing, Privacy-preserving, Image Preprocessing, Virtual Research Environment, Infrastructure Planning § INTRODUCTION Deep Learning (DL) approaches have advanced and innovated automatic diagnostics, such as quantifying the presence of cancerous cells in digitized histopathology glass slides, called Whole Slide Images (WSIs). However, running these diagnostic services at a large scale requires significant infrastructure capacity for storing and processing images with complex DL models; cloud computing and High-Performance Computing (HPC) resources are often needed. The owners of the medical images, e.g., hospitals, often do not have such an infrastructure and have to rely on collaborators with remote infrastructure resources. Establishing a DL-based pipeline for medical images on a remote infrastructure is challenging; for instance, 1) WSIs often contain privacy-sensitive information in their metadata and cannot be directly sent out to the public cloud from the hospitals; 2) WSIs are often very large and require high network bandwidth to upload; and 3) WSIs are split into tiles to process <cit.> and require specialized hardware, e.g., GPUs, to run complex DL models. Furthermore, it is often complicated to deploy an end-to-end pipeline and create an efficient re-configurable workflow <cit.> on remote infrastructure.
In this paper, we tackle these challenges by proposing a cloud-based service that will be integrated into a collaborative virtual research environment based on the works <cit.>, and we present a use case of a medical image preprocessing application from the digital pathology domain to validate our methodology. § CASE STUDY Digital pathology overcomes the hurdles of traditional histopathology by facilitating the diagnostic process using a WSI <cit.>. The preparation of histological glass slides may result in the appearance of artifacts on the obtained WSI due to improper handling of the tissue specimen during the tissue processing stages. These histological artifacts are diagnostically irrelevant and are usually ignored by pathologists in the diagnosis process <cit.>. Therefore, it is vital to detect and remove them before applying diagnostic or prognostic algorithms. Some frequently appearing artifacts are damaged tissue, folded tissue, blur, air bubbles, and diagnostically irrelevant blood <cit.>. Computational pathology (CPATH) researchers may run DL-based artifact preprocessing algorithms over thousands of WSIs before applying diagnostic algorithms, requiring powerful computational resources to process WSIs efficiently. Fig. <ref> presents an overview of such an artifact preprocessing pipeline, which is an ensemble of five DL models that detect blood, blur, damaged tissue, folded tissue, and air bubbles in a WSI in a binary fashion. Traditionally, the artifact preprocessing pipeline runs on a single machine, which exposes it to a single security breach or system failure. Besides, handling gigapixel WSIs is time-consuming and resource-intensive, which raises the demand for parallel distributed computing. Nevertheless, WSIs processed on private clouds in research environments are de-identified or pseudonymized under various regulations. This raises concerns about the embedded privacy-sensitive metadata when distributing the processing over public clouds. § METHODOLOGY We introduce a methodology to cope with the highlighted challenges, as shown in Fig. <ref>. It consists of five main steps: preparation of data and workflow, resource allocation, deployment and execution, load balancing, and data aggregation. §.§ Data and workflow preparation Step 1 aims to introduce the parallelism and encryption used by the subsequent steps. To guarantee privacy-preserving requirements, we remove the metadata from the WSI before splitting the gigapixel image into many image tiles to introduce data parallelism. Meanwhile, containerizing the computational tasks as several reusable fine-grained services can improve scalability and security, since they are isolated from each other and from the host system. In addition, we use a matrix A_x to record the distribution of the tiles over a grid; this matrix can be encoded to hide the tile coordinates and divided into sub-matrices A_e,1, A_e,2, ..., A_e,K. Each sub-matrix is considered as an index of the distributed dataset D_e,k for one service-based task. §.§ Resource allocation Based on the prepared data and service-based tasks, Step 2 maps available resources (e.g., clusters at universities and commercial clouds) to the various tasks in a manner that optimizes their utilization and satisfies the requirements <cit.>. Related methods, such as IC-PCP <cit.> and machine learning-based approaches <cit.>, can be adapted for workflow scheduling.
For bi-objective optimization, such as reducing execution time and cost, there is a trade-off between time performance and monetary cost in the cloud. On this basis, this paper studies workflow scheduling under the influence of privacy requirements and split data sets, which makes the research problem more challenging. §.§ Deployment and execution After a deployment plan has been created in Step 2, the datasets and service-based tasks are assigned to the planned infrastructures equipped with computation, communication, and storage resources. In Step 3, the system should ensure that data storage and task execution remain in place and continue to be effective even under changes to the system (such as downtime, errors, or attacks) or emerging threats. Because of the distributed processing, the approach reduces the burden on any single machine and avoids a single point of security failure. §.§ Load Balance Considering that computing nodes may unpredictably slow down or fail during their execution, Step 4 aims to improve the performance, reliability, and load balance of task-based applications <cit.>. This approach asymptotically achieves near-ideal load balancing and computation cost in the presence of slow nodes (stragglers), and it can also be complementary to workflow scheduling. §.§ Data aggregation Step 5 takes the predicted distributed output b_o,k for each encoded tile, produced in Step 3 for every service task, and reconstructs the encoded distributions as A_d,k. Privacy preservation can be guaranteed by the random-value perturbation technique <cit.>. This approach has solid theoretical foundations and is easier to apply for the reconstruction of the encrypted data than alternatives (e.g., differential privacy and secure multiparty computation <cit.>), especially for data-matrix manipulation. Then, by tracing back the coordinates, we can create a segmentation mask for the detected artifacts and present the results of the DL-based artifact detection in a summary view, including visualization, evaluation, and metrics. § SYSTEM MODEL We sketch out a privacy-preserving distributed processing pipeline for the medical application, shown in Fig. <ref>. It is composed of three main steps, viz. Splitting, Computation, and Aggregation. Both Splitting (see Step 1) and Aggregation (see Step 5) are executed on a trusted server. Such a distributed data processing application can be defined as a tuple: 𝒜=(ℳ, ε, 𝒟, ℛ, ℐ, req) where ℳ denotes a set of lightweight interconnected microservices, with a source microservice m_src processing the data stream produced by the source dataset 𝒟 and a sink microservice m_snk representing the final results ℛ. ε indicates a set of data streams d_u,i flowing from an upstream microservice m_u to a downstream microservice m_i∈ℳ. ℐ denotes a set of cloud infrastructures, and req is a set of user requirements. Along these lines, the research problem shifts the emphasis to privacy-preserving service orchestration, or more precisely: how can a virtual infrastructure be customized and the workflow execution be scheduled under privacy-preserving constraints while reducing time and monetary cost? Privacy requirements: Reconstruction of a WSI from distributed resource nodes can lead to finding similar medical images using content-based image retrieval and extrapolating possible patient information from other sources. Therefore, with this distributed scheme, patient privacy is preserved during the process.
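To make the trusted-server roles of Splitting and Aggregation more tangible, the following sketch uses a plain secret permutation to hide the tile layout before distributing the encoded indices over K nodes and to restore the order afterwards; this is a simplified stand-in for the cited random-value perturbation technique, and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np


def encode_and_split(index_matrix, k_nodes, rng):
    """Trusted server (Splitting): hide the tile layout behind a secret
    permutation and split the encoded indices into K sub-matrices A_e,k."""
    flat = index_matrix.ravel()
    secret_perm = rng.permutation(flat.size)       # kept only on the trusted server
    encoded = flat[secret_perm]                    # workers never see grid coordinates
    sub_matrices = np.array_split(encoded, k_nodes)
    return sub_matrices, secret_perm


def aggregate(per_tile_outputs, secret_perm, grid_shape):
    """Trusted server (Aggregation): restore the original tile order of the
    per-tile outputs b_o,k and rebuild a WSI-level artifact map."""
    restored = np.empty_like(per_tile_outputs)
    restored[secret_perm] = per_tile_outputs       # undo the permutation
    return restored.reshape(grid_shape)


# Usage sketch:
# rng = np.random.default_rng(seed=42)
# subsets, perm = encode_and_split(index_matrix, k_nodes=4, rng=rng)
# ... each node processes its subset and returns one output per tile ...
# artifact_map = aggregate(np.concatenate(node_outputs), perm, index_matrix.shape)
```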
Bi-objective optimization: Reducing the execution time of the application over the cloud can be crucial for many stakeholders, as it can lead to significant cost savings and improve the overall processing time for a WSI. We aim to reduce the monetary cost f_1 and minimize the application's maximum completion time (i.e., makespan) f_2. Let ET(m_src) and ET(m_snk) denote the execution times of the splitting and aggregation services on the trusted server, and let m_det denote the artifact detection microservices, which are deployed to the cloud, with ET(m_det(A_e,k, 𝒟_e,k), ℐ_k) denoting their total execution time. Then, the bi-objective optimization problem can be formulated as follows: min f_1 = ∑_k=1^K ET(m_det(A_e,k, 𝒟_e,k), ℐ_k) × p_k × x_k min f_2 = ET(m_src (𝒟))+makespan(m_det) +ET(m_snk(ℛ)) Here, K and p_k represent the number of split data sets and the unit price of cloud infrastructure ℐ_k, respectively, subject to makespan(m_det) = max{ET(m_det(A_e,k, 𝒟_e,k), ℐ_k)× x_k} ET(m_det(A_e,k, 𝒟_e,k), ℐ_k) >0 x_k= 1, if m_det is mapped to ℐ_k, 0, otherwise. § DISCUSSION AND FUTURE WORK This work-in-progress paper presents a methodology for privacy-preserving task-based parallel applications in distributed cloud environments. Our method enables domain-specific users to handle gigapixel medical images efficiently while maintaining privacy among distributed nodes. In future work, we will develop prototypes and demonstrate the benefits of our pipeline using datasets from different hospitals and integrating the method with a Jupyter-based virtual research environment. § ACKNOWLEDGMENT This work has been funded by the European Union projects CLARIFY (860627), ENVRI^FAIR (824068), BlueCloud-2026 (101094227) and LifeWatch ERIC.
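As a toy illustration of the bi-objective formulation above, the following sketch enumerates assignments of the K splits to candidate infrastructures and keeps the plans that are non-dominated in (f_1, f_2); the simple execution-time model and all parameter names are assumptions made purely for illustration and do not correspond to the actual scheduler.

```python
import itertools


def pareto_plans(split_sizes, infra_speeds, infra_prices, et_src, et_snk):
    """Enumerate assignments of the K splits to candidate infrastructures and
    keep the (f_1, f_2) non-dominated plans. ET(m_det, I) is approximated as
    split_size / speed, purely for illustration."""
    plans = []
    for assign in itertools.product(range(len(infra_speeds)), repeat=len(split_sizes)):
        ets = [size / infra_speeds[i] for size, i in zip(split_sizes, assign)]
        f1 = sum(et * infra_prices[i] for et, i in zip(ets, assign))   # monetary cost
        f2 = et_src + max(ets) + et_snk                                # makespan
        plans.append((f1, f2, assign))
    return [p for p in plans
            if not any(q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
                       for q in plans)]


# Example: four splits, two infrastructures (fast/expensive vs slow/cheap)
# print(pareto_plans([10, 10, 12, 8], [4.0, 1.0], [0.9, 0.1], et_src=2.0, et_snk=1.0))
```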
http://arxiv.org/abs/2307.04973v1
20230711022745
SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image
[ "Guoyao Deng", "Ke Zou", "Kai Ren", "Meng Wang", "Xuedong Yuan", "Sancong Ying", "Huazhu Fu" ]
cs.CV
[ "cs.CV" ]
G.Deng et al. National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Sichuan, China College of Computer Science, Sichuan University, Sichuan, China Institute of High Performance Computing, A*STAR, Singapore SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image Guoyao Deng1, Ke Zou1,3, Kai Ren2, Meng Wang3, Xuedong Yuan2, Sancong Ying2 and Huazhu Fu3 August 12, 2023 Recently, the Segment Anything Model has taken a significant step towards general artificial intelligence. Simultaneously, its reliability and fairness have garnered significant attention, particularly in the field of healthcare. In this study, we propose multi-box prompt-triggered uncertainty estimation for SAM to indicate the reliability of segmented lesions or tissues. We estimate the distribution of SAM predictions using Monte Carlo sampling with prior distribution parameters, employing different prompts as a form of test-time augmentation. Our experimental results demonstrate that multi-box prompt augmentation enhances SAM performance and provides uncertainty for each pixel. This presents a groundbreaking paradigm for a reliable SAM. § INTRODUCTION Large-scale foundation models are increasingly gaining popularity among artificial intelligence researchers. In the realm of natural language processing (NLP), the Generative Pre-trained Transformer (GPT) <cit.> and ChatGPT, developed by OpenAI, have witnessed rapid growth owing to their exceptional ability to generalize. These models have found applications in diverse domains such as autonomous driving and healthcare. The remarkable generalization capabilities of large models often instill a sense of trust among users; however, their fairness and reliability have also been subject to some degree of scrutiny. Nowadays, there is a growing wave of enthusiasm surrounding computer vision due to the release of the Segment Anything Model (SAM) <cit.> by Meta AI. SAM has been trained on the massive SA-1B dataset, which consists of over 11 million images and one billion masks, making it an excellent tool. It excels at producing accurate segmentation results from various types of prompts, including foreground/background points, rough boxes or masks, and free-form text. The introduction of SAM has led many researchers to believe that general artificial intelligence has finally arrived. However, some researchers have expressed concerns about the performance of SAM <cit.>. Specifically, they have identified areas such as industrial defect detection <cit.>, camouflaged target detection <cit.>, and tumor and lesion segmentation <cit.> in medical images where further improvements are needed. Additionally, the reliability of SAM still requires further study. Uncertainty estimation <cit.> is one way to provide reliability for SAM. Previously, uncertainty estimation has demonstrated its reliability and robustness in several medical segmentation tasks <cit.>, including skin lesions and brain tumors <cit.>, among others. The current uncertainty estimation methods can be roughly divided into deterministic-based methods <cit.>, Bayesian Neural Network-based methods <cit.>, ensemble-based methods <cit.>, dropout-based methods <cit.>, and test-time augmentation-based methods <cit.>. The focus of this paper is to keep the method simple and retain the original structure of SAM while achieving pixel-level uncertainty estimation.
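The central idea developed in the Method section below, i.e., averaging SAM predictions over several randomly perturbed box prompts, can be sketched as follows; the sam_predict callable is a hypothetical stand-in for one SAM forward pass (e.g., via a SamPredictor-style interface), and the jitter magnitude and the number of prompts are illustrative choices, not values taken from the paper.

```python
import numpy as np


def multibox_prediction(image, base_box, sam_predict, m_prompts=8, jitter=0.05,
                        rng=np.random.default_rng(0)):
    """Average SAM outputs over M randomly perturbed box prompts.

    `sam_predict(image, box)` stands in for a single SAM forward pass that
    returns a per-pixel foreground probability map for one box prompt.
    """
    x0, y0, x1, y1 = base_box
    scale = np.array([x1 - x0, y1 - y0, x1 - x0, y1 - y0], dtype=float)
    probs = []
    for _ in range(m_prompts):
        box = np.asarray(base_box, dtype=float) + rng.uniform(-jitter, jitter, 4) * scale
        probs.append(sam_predict(image, box))      # prediction y^i for prompt b^i
    probs = np.stack(probs)                        # shape (M, H, W)
    return probs.mean(axis=0), probs               # combined prediction and raw samples
```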
In Fig. <ref>, we present the optic disc segmentation results <cit.> for both high- and low-quality fundus images under different conditions. SAM demonstrates better segmentation results for high-quality images, and the inclusion of different conditions leads to certain performance improvements. However, SAM's segmentation results for lower-quality images are not satisfactory. Nevertheless, the inclusion of different conditions greatly enhances its performance, particularly with more accurate box prompts. Furthermore, we have observed that different levels of box prompts tend to yield diverse results. This observation motivates us to introduce a novel approach, namely multi-box prompt-induced uncertainty estimation, for medical images. Therefore, the primary focus of this paper is to enhance the segmentation accuracy by employing multiple box prompts. This approach enables us to establish pixel-level reliability through uncertainty estimation. Specifically, we utilize SAM to predict the output distribution using different multi-box prompts. SAM with multi-box prompts generates numerous samples from the predictive distribution. Subsequently, these samples are used to calculate the variance, which provides an uncertainty estimate for the medical image segmentation. Our experiments demonstrate that multi-box prompts not only enhance performance on low-quality medical images but also provide uncertainty estimation for them. § METHOD The overall framework of our proposed method is depicted in Fig. <ref>. Our main focus is to enhance the reliability and accuracy of SAM in the context of zero-shot learning. To improve the accuracy of SAM, we incorporate multi-box prompts, which enable us to obtain more precise medical image segmentation results from the distribution. Specifically, we estimate the distribution of SAM predictions using Monte Carlo simulation with prior distribution parameters. This approach allows our method to estimate the aleatoric uncertainty by considering multiple forecasts for a single medical image. §.§ Mask Selection Strategy Under the unprompted setting, SAM generates multiple binary masks and can pop out several potential objects within an input. For a fair evaluation of interesting regions in a specific segmentation task, we follow the strategy of <cit.> to select the most appropriate mask based on its ground-truth mask. Formally, given N binary predictions {y^i}_{i=1}^N and the ground truth G for an input image, we calculate Dice scores for each pair to generate a set of evaluation scores {D^i}_{i=1}^N. We finally select the mask with the highest Dice score from this set. §.§ SAM with multi-box prompts Prompts can introduce errors into the model's inference due to their inherent inaccuracies. In order to reduce the influence of the prompt variance, we randomize M box prompts B={b^1,b^2,⋯,b^M}. Each box prompt guides SAM to generate a different segmentation result. Through this strategy, we obtain the predictions Y={y^1,y^2,⋯,y^M} of SAM under different prior cues, and combining them can improve the segmentation accuracy of SAM and reduce uncertainty. The combined prediction is computed as: ŷ = 1/M∑_i = 1^M f_SAM( I,b^i) , where ŷ denotes the combined prediction for image I. §.§ Uncertainty estimation of SAM with multi-box prompts Different box prompts cause variance in SAM's segmentation even if they refer to the same object from a human's point of view.
Inspired by this, our proposed multi-box prompts (MNP) algorithm simulates the annotations of multiple clinical experts to generate the final predictions and uncertainty estimations. To quantify the uncertainty triggered by multi-box prompts, assume M box prompts B={b^1,b^2,⋯,b^M} that all refer to the ground truth. With M box prompts and input image I, SAM generates a set of predictions Y={y^1,y^2,⋯,y^M}. As shown in Fig. <ref>, we present an uncertainty estimation procedure for multi-box prompts. We first describe the aleatoric uncertainty of a single given image I by the entropy <cit.>: U(y^i) = -∫ p(y^i|I) log p(y^i|I) dy, where U(y^i) estimates how diverse the prediction y^i is for the image I, y^i = { p_1^i, p_2^i, ⋯, p_N^i} denotes the prediction pixels, and N denotes the number of unique values in y^i. Then, we run a Monte Carlo simulation using multi-box prompts to obtain a set of predictions. Therefore, the uncertainty distribution is approximated as follows: U(Y|I) ≈ -∑_i = 1^M ∑_j = 1^N p_j^i log p_j^i, § EXPERIMENTS AND RESULTS Two different methods are utilized to perform image degradation to verify the reliability of SAM. In this section, we describe our evaluation protocols, compare the performance of SAM on datasets of different quality, and visualize the qualitative results on fundus image segmentation. §.§ Evaluation Protocols ∙ Dataset. We chose the sub-task of the REFUGE Challenge <cit.>, which concerns the segmentation of the optic cup and disc in fundus photographs. For simplicity's sake, we consider disc and cup as one category. In order to evaluate the reliability of SAM more objectively, we artificially constructed low-quality data based on high-quality source data by two different methods, namely introducing Gaussian noise with various levels of standard deviation (σ) and applying the realistic degradation model proposed by Shen et al. <cit.>, respectively. ∙ Metrics. We use four commonly used metrics for the evaluation: Dice score (Dice), expected calibration error (ECE) <cit.>, structure measure (Sm) <cit.>, and weighted F-measure (wFm) <cit.>. §.§ Quantitative Evaluation In Table <ref>, we present the segmentation results of different SAM modes on high-quality medical images. Initially, we compare the segmentation results of SAM in "everything" mode and SAM in "box" mode on normal medical images. It was found that the results using SAM in "box" mode were superior. Moreover, with the introduction of our algorithm, the performance of SAM improved further. Table <ref> and Table <ref> show the segmentation results of the SAM modes under Gaussian noise and on degraded medical images. We compare the results obtained from the aforementioned SAM modes. The performance of SAM in "everything" mode and SAM in "box" mode declines, whereas the performance of SAM in "multi-box" mode remains at a certain level, with a lower ECE index. Therefore, it can be concluded that the inclusion of multi-box prompts enhances the accuracy and reliability of SAM. §.§ Qualitative Comparison In Fig. <ref>, we first show the uncertainty estimation results of SAM in multi-box mode. As can be seen, the periphery of the optic disc is clearly marked as an area of uncertainty. Furthermore, we compare the segmentation results of the different SAM modes on normal and degraded medical images, as shown in Fig. <ref>. In SAM with everything mode, it is difficult to segment the optic disc.
Under the box prompt, the optic disc can be segmented under normal conditions, but the results under Gaussian noise and on degraded images are not satisfactory. Our method, in contrast, achieves better segmentation results even on degraded images and provides weights for uncertain pixels. This opens a new paradigm for SAM towards robust and reliable medical image segmentation. § DISCUSSION AND CONCLUSION In this paper, we investigated the segmentation performance of SAM on fundus images. The results have shown that box prompts significantly improve the segmentation, but different box prompts lead to variations in the predictions. The main method proposed in this paper, prompt augmentation, can help estimate these variations via aleatoric uncertainty and produce an uncertainty distribution map that highlights areas that are challenging to segment. The uncertainty map not only improves the segmentation process and final results but also enables the development of more advanced methods for segmenting fundus images. Moreover, the uncertainty map offers valuable guidance in areas where manual annotation is required; using the uncertainty distribution map to guide segmentation and improve accuracy is a noteworthy feature. Furthermore, the uncertainty map can help identify potential segmentation errors and support further analysis, providing useful information for clinicians.
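As a complement to the multi-box averaging sketch given earlier, the per-pixel uncertainty map discussed above could be computed from the stack of M predictions roughly as follows; using the binary entropy of the mean prediction (with the variance across prompts as an alternative) is one common choice and is not claimed to be the authors' exact implementation.

```python
import numpy as np


def uncertainty_map(probs, eps=1e-8):
    """Pixel-level uncertainty from the stack of M multi-box predictions.

    `probs` has shape (M, H, W) with per-pixel foreground probabilities; the
    binary entropy of the mean prediction (and, alternatively, the variance
    across prompts) highlights pixels on which the prompts disagree.
    """
    p = np.clip(probs.mean(axis=0), eps, 1.0 - eps)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    variance = probs.var(axis=0)
    return entropy, variance
```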
http://arxiv.org/abs/2307.07389v1
20230714145844
Learning Sparse Neural Networks with Identity Layers
[ "Mingjian Ni", "Guangyao Chen", "Xiawu Zheng", "Peixi Peng", "Li Yuan", "Yonghong Tian" ]
cs.LG
[ "cs.LG" ]
M. Ni et al. Peking University, Beijing 100871, China {sccdnmj, gy.chen, pxpeng, yuanli-ece, yhtian}@pku.edu.cn Peng Cheng Laboratory, Shenzhen 518055, China [email protected] Learning Sparse Neural Networks with Identity Layers Mingjian Ni1 Guangyao Chen1 Xiawu Zheng2 Peixi Peng1 Li Yuan1 Yonghong Tian1 () =================================================================================== The sparsity of Deep Neural Networks is well investigated to maximize the performance and reduce the size of overparameterized networks as possible. Existing methods focus on pruning parameters in the training process by using thresholds and metrics. Meanwhile, feature similarity between different layers has not been discussed sufficiently before, which could be rigorously proved to be highly correlated to the network sparsity in this paper. Inspired by interlayer feature similarity in overparameterized models, we investigate the intrinsic link between network sparsity and interlayer feature similarity. Specifically, we prove that reducing interlayer feature similarity based on Centered Kernel Alignment (CKA) improves the sparsity of the network by using information bottleneck theory. Applying such theory, we propose a plug-and-play CKA-based Sparsity Regularization for sparse network training, dubbed CKA-SR, which utilizes CKA to reduce feature similarity between layers and increase network sparsity. In other words, layers of our sparse network tend to have their own identity compared to each other. Experimentally, we plug the proposed CKA-SR into the training process of sparse network training methods and find that CKA-SR consistently improves the performance of several State-Of-The-Art sparse training methods, especially at extremely high sparsity. Code is included in the supplementary materials. § INTRODUCTION Deep Neural Networks (DNNs) achieve great success on many important tasks, including but not limited to computer vision and natural language processing. Such accurate solutions highly rely on overparameterization, which results in a tremendous waste of resources. A variety of methods are proposed to solve such issues, including model pruning <cit.> and sparse training <cit.>. Sparse training aims to train a sparse network from scratch, which reduces both training and inference expenses. A recent study <cit.> shows the close relation between overparameterization and interlayer feature similarity (i.e. similarity between features of different layers, as shown in Figure <ref> ). Specifically, overparameterized models possess obviously greater similarity between features of different layers. Concluding from the facts above, we know that both interlayer feature similarity and network sparsity are deeply related to overparameterization. Inspired by this, we utilize the interlayer feature similarity to increase network sparsity and preserve accuracy at a high level, namely by adopting similarity methods to solve sparsity problems. Following this path, we survey similarity measurements of features, including Canonical Correlation Analysis (CCA) <cit.> and Centered Kernel Alignment (Linear-CKA and RBF-CKA) <cit.>, etc. Among these measurements, CKA measurement is advanced and robust, for it reliably identifies correspondences between representations in networks with different widths trained from different initializations. Theoretically, CKA measurement has many good properties, including invariance to orthogonal transform and isotropic scaling, and close correlation with mutual information <cit.>. 
The advantages of CKA make it possible to propose robust methods to solve sparsity problems with interlayer feature similarity. To this end, we propose CKA-based Sparsity Regularization (CKA-SR) by introducing the CKA measurement into training loss as a regularization term, which is a plug-and-play term and forces the reduction of interlayer feature similarity. Besides, we further prove that the proposed CKA-SR increases the sparsity of the network by using information bottleneck(IB) theory <cit.>. Specifically, we mathematically prove that our CKA-SR reduces the mutual information between the features of the intermediate and input layer, which is one of the optimization objectives of the information bottleneck method. Further, we prove that reducing the mutual information above is equivalent to increasing network sparsity. By these proofs, we demonstrate the equivalence of reducing interlayer feature similarity and increasing network sparsity, which heuristically investigates the intrinsic link between interlayer feature similarity and network sparsity. To validate the proposed CKA-SR, we conduct experiments on several advanced sparse training methods, such as Lottery Ticket Hypothesis (LTH) <cit.>, Gradient Signal Preservation (GraSP) <cit.>, Dual Lottery Ticket Hypothesis (DLTH) <cit.>, and Random Sparse Training <cit.>. Specifically, we introduce our CKA-SR regularization to the training process of these sparse training methods and thus achieve consistent performance gains across these methods. Moreover, we introduce CKA-SR to the training and finetuning process of network pruning methods such as l1-norm filter pruning <cit.>, non-structured weight-level pruning <cit.>, and knapsack channel pruning <cit.>, and thus achieve performance improvements. In short, CKA-SR boosts the performance of sparse training and network pruning methods. Appendix and codes are included in the supplementary materials. See them in https://anonymous.4open.science/r/Learning-Sparse-Neural-Networks-with-Identity-Layers-9369https://anonymous.4open.science/r/Learning-Sparse-Neural-Networks-with-Identity-Layers-9369. Our contributions are four-fold: * We heuristically investigate the intrinsic link between interlayer feature similarity and network sparsity. To the best of our knowledge, we are the first to find that reducing interlayer feature similarity directly increases network sparsity. * Theoretically, we prove the equivalence of interlayer feature similarity reduction, interlayer mutual information reduction, and network sparsity increment. * We proposed Identity Layers Regularization (ILR) with few-shot samples increases network sparsity and weakens overparameterization by explicitly reducing interlayer feature similarity. Specifically, we implement ILR as CKA-SR. * Experimentally, our CKA-SR regularization term increases network sparsity and improves the performance of multiple sparse training methods and several pruning methods. § RELATED WORKS AND PRELIMINARIES §.§ Centered Kernel Alignment Here we provide the formalization of Centered Kernel Alignment (CKA). For the feature map X∈ℝ^n× p_1 and feature map Y∈ℝ^n× p_2 (where n is the number of examples, while p_1 and p_2 are the number of neurons), we use kernels k and l to transform X and Y into K and L matrices, where the elements are defined as: K_ij = k(x_i, x_j), L_ij = l(y_i, y_j). 
Further, the formalization of CKA-based similarity measurement ℱ of K and L matrices could be formulated as: 𝐂𝐊𝐀(K,L) = HSIC(K,L)/√(HSIC(K,K)HSIC(L,L)) where HSIC is the empirical estimator of Hilbert-Schmidt Independence Criterion <cit.>. Then, the formalizations of CKA-based similarity measurement for linear kernel k(x, y) = x^Ty is as follows: 𝐂𝐊𝐀_Linear(X,Y) = ||Y^TX||_F^2/||X^TX||_F||Y^TY||_F §.§ Interlayer feature similarity of overparameterized models Nguyen et al. <cit.> investigate the relationship between overparameterized models and similar feature representations. Specifically, wide ResNets, deep ResNets and ResNets trained on small datasets possess extremely similar feature representations between adjacent layers, named block structure. Then they infer an empirically verified hypothesis that overparameterized models possess similar feature representations. Besides, similar observations also appear in ViT <cit.> based architectures. We may conclude that such block structure is a common problem in different architectures. This prompts us to explore the potential benefits of reducing interlayer feature similarity and learning sparse neural networks with identity layers. § METHODOLOGY §.§ Sparsity regularization based on Centered Kernel Alignment As discussed above, the interlayer feature similarity of overparameterized models motivates us to learn sparse neural networks with identity layers. We choose Centered Kernel Alignment (CKA) as the basis of our method, for it's widely applied to measuring feature similarity of different layers. On the other side, the high similarity of layers indicates the overparameterization of Deep Neural Networks. Hence, CKA similarity measurement could be regarded as a scale of overparameterization. This reminds us of directly reducing this measurement to solve overparameterization problems. Even more remarkable, CKA owns many excellent properties, including robustness, invariance to orthogonal transformation, and invariance to scale transformation. These properties make CKA ideal for designing a regularization term to solve overparameterization problems. Specifically, we add a CKA-based regularization term to the training loss function. For a model with empirical loss (cross-entropy loss) ℒ_ℰ, the training loss with CKA-SR is formalized as: ℒ = ℒ_ℰ + ℒ_𝒞 = ℒ_ℰ + β·∑_s=1^S∑_i=0^N_s∑_j=0, j≠ i^N_s w_ij𝐂𝐊𝐀_Linear(X_i,X_j) where ℒ_𝒞 is CKA-SR and β is the weight of ℒ_𝒞. S is the number of stages in the network. For networks with only one stage such as DeiTs, N_s is the total number of layers. And for networks with several stages such as ResNets, N_s is the number of layers in each stage s. w_ij is the weight of CKA measurement between the i^th and the j^th layer, and it's optional. X_0 is the input representation and X_i is the output representation of the i^th layer. The ℒ_𝒞 part in Eq.(<ref>) forcibly reduces the sum of the pairwise similarity of all layers in the network, i.e. forcibly reduces the interlayer similarity of the network. §.§ Theoretical analysis §.§.§ Approximate sparsity. To further explore the relationship between the Frobenius norm of weight matrix and network sparsity, we expand sparsity to approximate sparsity. 
We define ϵ-sparsity (i.e., approximate sparsity) of a neural network as follows: S_ϵ = |{w|w ∈𝕎 |w|<ϵ}|/|𝕎| where ϵ is a number close to zero, 𝕎 is the set consisting of all parameters of the network's weight matrix, |𝕎| is the total number of parameters, and {w|w ∈𝕎 |w|<ϵ} is the set consisting of small parameters (i.e., parameters with an absolute value smaller then ϵ) of the weight matrix. In Eq. (<ref>), S_ϵ represents the proportion of network parameters that approach 0. We define this as ϵ-sparsity of the network. Further, we prove that ϵ-sparsity and sparsity (i.e., proportion of network parameters that equal 0) of neural networks are approximately equivalent in practice. Our theory is formulated as Theorem <ref>. See the detailed proof of Theorem <ref> in the Appendix. The ϵ-sparsity and the sparsity of neural networks are approximately equivalent. §.§.§ Information bottleneck. The information bottleneck (IB) theory proposed by Tishby et al. <cit.> is an extension of the rate distortion theory of source compression. This theory shows a trade-off between preserving relevant label information and obtaining efficient compression. Tishby et al. <cit.> further research the relationship between information bottleneck theory and deep learning. They interpret the goal of deep learning as an information-theoretic trade-off between compression and prediction. According to the principles of information bottleneck theory, for a neural network Y = f(X) with input X and output Y, the best representation of intermediate feature map X̂ captures the relevant features and ignores the irrelevant features (features that have little contribution to the prediction of Y) at the same time. This process is called "compression". One of its minimization objectives is as follows: L = I(X;X̂) - α I(X̂;Y) where I(X;X̂) is the mutual information between input X and intermediate representation X̂, I(X̂;Y) is the mutual information between intermediate representation X̂ and output Y, and α is a weight parameter for adjusting their proportions. §.§.§ Minimizing the mutual information. Firstly, we prove that our CKA-SR is continuous and optimizable in Theorem <ref>, which makes it possible to minimize CKA-SR in machine learning. See the detailed proof of Theorem <ref> in the Appendix. Then we prove that minimizing CKA-SR minimizes the mutual information R = I(X;X̂) between the intermediate and input representation. Besides, the α I(X̂;Y) part of Eq. (<ref>) is implicitly optimized through the cross entropy loss ℒ_ℰ. Thus, we prove that our method minimizes the optimization objective in Eq. (<ref>), i.e., our CKA-SR method conforms to the principles of information bottleneck theory, and it's beneficial to the representation compression process. Our theory is formulated as Theorem <ref>. ℒ_𝒞 is continuous and optimizable. Minimizing ℒ_𝒞 minimizes the mutual information R = I(X;X̂) between intermediate representation X̂ and input representation X. To prove Theorem <ref>, we first review Lemma <ref> and Lemma <ref> from <cit.> as follows. Following  <cit.>, we assume that X ∼𝒩(0, Σ_X) and Y ∼𝒩(0, Σ_Y), i.e., feature maps X and Y follow Gaussian distribution. Minimizing the distance between X^TY and zero matrix is equivalent to minimizing the mutual information I(X; Y) between representation X and Y. Minimizing 𝐂𝐊𝐀_Linear(X, Y) is equivalent to minimizing I(X; Y). These two lemmas illustrate the relationship between the CKA similarity measurement and information theory. 
That is, minimizing the CKA similarity between two feature representations is equivalent to minimizing the mutual information between them. Based on these two lemmas, we prove Theorem <ref>. See the detailed proof of the two lemmas and Theorem <ref> in the Appendix. Theorem <ref> connects CKA-SR with information bottleneck theory. In short, minimizing CKA-SR is equivalent to optimizing the optimization objective I(X;X̂) of information bottleneck theory. §.§.§ Increasing the sparsity of neural networks. Further, starting from the information bottleneck theory, we prove that CKA-SR increases the network sparsity, formulated as Theorem <ref>. Minimizing R = I(X; X̂) ⇔ Minimizing ||W||_F^2 ⇔ Increasing the approximate sparsity of network ⇔ Increasing network sparsity. According to Theorem <ref>, CKA-SR minimizes R = I(X;X̂) for any X. Further, combining this with Lemma <ref>, for any X, CKA-SR minimizes the distance between X^TX̂ and 0 matrix. For a fully-connected layer, we have X̂ = W^TX+b. Hence, due to the discussions above, we have: for any X, CKA-SR minimizes the distance between X^T(W^TX+b) = X^TW^TX+X^Tb and 0 matrix. We take an orthogonalized X. Due to the unitary invariance (i.e., orthogonal invariance in the real number field) of Frobenius norm, ||W||_F^2 equals to ||X^TW^TX||_F^2. Therefore, minimizing the distance between X^TW^TX+X^Tb and 0 matrix is equivalent to minimizing ||X^TW^TX||_F^2 and further equivalent to minimizing ||W||_F^2. The above minimization of ||W||_F^2 minimizes the norm of parameter values in weight matrix W, thus making the values more concentrated around 0 value. This increases the network's approximate sparsity (defined earlier in this article). Further, according to Theorem <ref>, the approximate sparsity and sparsity are approximately equivalent. So we prove that the above minimization of ||W||_F^2 increases the network sparsity. Theorem <ref> connects the optimization objective of information bottleneck theory with network sparsity, thus connecting CKA-SR with network sparsity. In short, CKA-SR models are more sparse. We validate this conclusion with our experimental results. Fig.<ref> compares parameter distribution between CKA-SR and baseline models. It's evident that the absolute value of CKA-SR network parameters is more concentrated around 0. § EXPERIMENTS §.§ Implementations §.§.§ Datasets and backbone models. We validate the effectiveness of our CKA-SR method on image classification, network pruning, and advanced sparse training. We use ResNet18, ResNet20, ResNet32 and ResNet50 <cit.> as backbones to conduct extensive experiments on CIFAR-10, CIFAR-100 and ImageNet datasets. §.§.§ Implementations. We implement our CKA-SR as a regularization of the loss function. We develop a plug-and-play CKA-SR class in PyTorch and plug it into various pre-training and sparse training codes. Because CKA-SR is a regularization of layerwise parameters instead of feature maps themselves, we could utilize few-shot samples of each batch (generally 8 samples when the batch size is 128 or 256) to compute CKA-SR. This reduces the computational complexity, thus reducing training expenses. Precisely, we strictly follow the experimental settings of the pruning <cit.> and sparse training methods <cit.> and make fair comparisons with them using CKA-SR. The total number of epochs, batch size, optimizer, weight decay, and learning rates all stay the same with the methods to be compared with. 
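The plug-and-play character of CKA-SR can be illustrated with a minimal PyTorch sketch of the training loss ℒ = ℒ_ℰ + ℒ_𝒞 defined in the Methodology section; the feature-collection mechanism (e.g., forward hooks), the uniform pairwise weights, and the default values of β and of the number of few-shot samples are assumptions made for illustration and do not reproduce the authors' released code.

```python
import torch
import torch.nn.functional as F


def linear_cka(x, y, eps=1e-8):
    """Linear CKA between two feature matrices of shape (n, p1) and (n, p2)."""
    x = x - x.mean(dim=0, keepdim=True)   # centering implied by the 'centered' alignment
    y = y - y.mean(dim=0, keepdim=True)
    num = (y.t() @ x).norm(p="fro") ** 2
    den = (x.t() @ x).norm(p="fro") * (y.t() @ y).norm(p="fro")
    return num / (den + eps)


def cka_sr(layer_features, beta=1e-4, n_samples=8):
    """Plug-and-play CKA-SR term: pairwise linear CKA over the flattened
    features of the given layers, computed on a few samples of the batch."""
    feats = [f[:n_samples].flatten(start_dim=1) for f in layer_features]
    reg = feats[0].new_zeros(())
    for i in range(len(feats)):
        for j in range(len(feats)):
            if i != j:
                reg = reg + linear_cka(feats[i], feats[j])
    return beta * reg


# Usage sketch inside a training step (features collected, e.g., via forward hooks):
# loss = F.cross_entropy(logits, targets) + cka_sr(collected_features, beta=1e-4)
```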
§.§ Pre-Training with CKA-SR As previously proved, our CKA-SR increases network sparsity. So we validate the performance of CKA-SR in network pruning tasks. We directly prune models pre-trained with CKA-SR on large-scale datasets such as ImageNet. We carry out experiments on several pruning methods and find that our method is effective. As shown in Figure <ref>, at the same pruning ratio, CKA-SR models outperform baseline models. §.§.§ Structured pruning. Following the setting of  <cit.>, we perform filter pruning on models pre-trained with CKA-SR without finetuning. Specifically, we prune the filter according to the L1-Norm. The relationship between the pruning ratio and performance is shown in Figure <ref>. When a few filters are pruned, the performance reduction of CKA-SR models is significantly smaller than that of baseline models. As a State-Of-The-Art method for channel pruning, we perform Knapsack channel pruning <cit.> on models pre-trained with CKA-SR and achieve higher classification accuracy. The results of Knapsack pruning (w/o finetuning) are shown in Figure <ref>. When a few channels are pruned, the performance reduction of CKA-SR models is much smaller than that of baseline models, which means CKA-SR models possess much higher sparsity. §.§.§ Non-structured pruning. We perform non-structured weight-level pruning <cit.> according to the absolute values of individual weights and compare the performance between baseline ResNet models and pre-trained ResNets with CKA-SR. The relationship between pruning ratio and performance is shown in Figure <ref>. It could be concluded that when massive weights are pruned, the performance reduction of CKA-SR models is smaller than that of baseline models. Generally, pre-trained models with CKA-SR outperform baseline models in both structured and non-structured pruning methods. §.§ Sparse network training with CKA-SR We conduct extensive experiments on several State-Of-The-Art sparse training methods. For fair comparisons, our experiments follow the same settings and backbones of these methods <cit.>. Note that we conduct experiments on extremely high sparsity (such as 99.8%) settings in GraSP <cit.>, Random sparse training <cit.>, and DLTH <cit.>. From Table <ref>, we can find that CKA-SR consistently improves the performance at different levels of sparsity ratios in LTH <cit.>, GraSP <cit.>, Random sparse training <cit.>, and DLTH <cit.>. §.§.§ LTH. Lottery Ticket Hypothesis (LTH) <cit.> is proposed to train a sparse network from scratch, which states that any randomly initialized dense network contains sub-networks achieving similar accuracy to the original network. We plug our CKA-SR into the training process of LTH. We use the code implemented for LTH by <cit.>, adopt ResNet32 as the backbone, and apply sparsity ratios from 0.70 to 0.98 for fair comparisons. The results are given in the first row of Table <ref>. §.§.§ GraSP. Gradient Signal Preservation (GraSP) <cit.> proposes to preserve the gradient flow through the network during sparse training. We plug our CKA-SR into the sparse training process of GraSP, adopt ResNet32 as the backbone, and apply sparsity ratios from 0.70 to 0.998. The results are given in the second row of Table <ref>. §.§.§ Random sparse training. As one of the newest and State-Of-The-Art sparse training methods, it has been proven that sparse training of randomly initialized networks can also achieve remarkable performances <cit.>. 
We plug our CKA-SR into the random sparse training process, adopt ResNet20 as the backbone, and apply sparsity ratios from 0.70 to 0.998. The results are given in the third row of Table <ref>. §.§.§ DLTH. As one of the newest and State-Of-The-Art LTH-based sparse training methods, Dual Lottery Ticket Hypothesis (DLTH) <cit.> proposes to randomly select subnetworks from a randomly initialized dense network, which can be transformed into a trainable condition and achieve good performance. We apply our CKA-SR to the training process of the DLTH method, adopt ResNet20 as the backbone, and apply sparsity ratios from 0.70 to 0.998. The results are given in the final row of Table <ref>. As shown in Table <ref>, our CKA-SR can be plugged into multiple sparse training methods and improves the model performance consistently. The CKA-SR is effective consistently at different sparse networks, especially at extremely high sparsity. For GraSP, CKA-SR achieves more than 4.0% of performance improvement at sparsity 99.5% and 6.0% at sparsity 99.8%. §.§ Ablation studies §.§.§ Ablation study of regularization term. Savarese et al. <cit.> develop a regularization-based sparse network searching method named Continuous Sparsification. This method introduces L_0 Regularization into sparse training. We compare our CKA-SR with L_0 Regularization theoretically and experimentally. Theoretically, CKA-SR and L_0 regularization regularize networks from different granularity levels. L_0 regularization regularizes networks from the individual parameter level, while CKA-SR regularizes networks from the layer level. These regularizations from different granularity levels could work together. Experimentally, we conduct sparse training experiments with ResNet18 on CIFAR-10 using the official code of the CS method. We find that our CKA-SR is able to replace L_0 regularization and achieves better performance. Besides, combining CKA-SR and L_0 improves performance by 0.4%, demonstrating that our CKA-SR could cooperate with other regularizations. The results are shown in Table <ref>. §.§.§ Ablation study of hyperparameter β. We conduct the ablation study of hyperparameter β with Random Sparse Training <cit.> method on CIFAR-10 dataset. Taking ResNet20 model at a sparsity of 0.95 and adjusting the weight hyperparameter β of our CKA-SR, we get the results shown in Table <ref>. We conclude that multiple values of hyperparameter β between 1e-05 and 1e-03 increase the performance of sparse networks. However, when the hyperparameter β becomes too large, it would weaken the succession of information through layers, thus causing a reduction in performance. That is to say, there is a trade-off between the identity of layers and the succession of information through layers. In the view of sparsity, there is a trade-off between high sparsity and ideal performance. § CONCLUSION Our work reveals the relationship between overparameterization, network sparsity, and interlayer feature similarity. We thus propose to use the robust and advanced CKA similarity measurement to solve the overparameterization issue. Specifically, we propose a plug-and-play sparsity regularization named CKA-SR which explicitly reduces interlayer similarity. Theoretically, we reveal the equivalence of reducing interlayer similarity and increasing network sparsity, thus proving the CKA-SR increases network sparsity. Experimentally, our CKA-SR consistently improves the performances of several State-Of-The-Art sparse training methods and several pruning methods. 
Besides, our CKA-SR outperforms previous regularization methods. In the future, given the current limitations of manually selecting hyperparameters and the additional cost of computing the regularization loss, we will continue to investigate the cooperation of multiple regularizations in sparse training and further reduce the expense of sparse training.
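For completeness, the ϵ-sparsity S_ϵ defined earlier can be measured for a trained model with a few lines of PyTorch; the function name and the threshold value are illustrative choices of this sketch.

```python
import torch


def epsilon_sparsity(model, eps=1e-3):
    """Fraction of parameters with |w| < eps (the ϵ-sparsity S_ϵ); as eps
    approaches 0 this tends to the exact sparsity, i.e., the fraction of
    parameters that are exactly zero."""
    small, total = 0, 0
    for p in model.parameters():
        small += (p.detach().abs() < eps).sum().item()
        total += p.numel()
    return small / total
```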
http://arxiv.org/abs/2307.04508v1
20230710120620
Laplace-Transform GW
[ "Johannes Tölle", "Niklas Niemeyer", "Johannes Neugebauer" ]
physics.chem-ph
[ "physics.chem-ph", "physics.comp-ph" ]
Laplace-Transform GW Johannes Tölle^1,[email: [email protected]], Niklas Niemeyer^2,, and Johannes Neugebauer^2[email: [email protected]] ^1Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125, USA ^2Theoretische Organische Chemie, Organisch-Chemisches Institut and Center for Multiscale Theory and Computation, Westfälische Wilhelms-Universität Münster, Corrensstraße 36, 48149 Münster, Germany ^Both authors contributed equally. Date: July 9, 2023 empty Abstract We present a simple and accurate GW implementation based on a combination of a Laplace transformation (LT) and other acceleration techniques used in post-SCF quantum chemistry, namely, natural auxiliary functions and the frozen-core approximation. The LT-GW approach combines three major benefits: (a) a small prefactor for the computational scaling, (b) easy integration into existing molecular GW implementations, and (c) significant performance improvements for a wide range of possible applications. Illustrating these advantages for systems consisting of up to 352 atoms and 7412 basis functions, we further demonstrate the benefits of this approach combined with an efficient implementation of the Bethe–Salpeter equation. INTRODUCTION – After its introduction in 1965 <cit.>, the GW (G: time ordered one-body Green’s function, W: screened Coulomb interaction) method has now become the standard approach for the accurate ab-initio determination of ionization potentials (IPs), electron affinities (EAs) (or more generally quasi-particle energies), and in combination with the Bethe–Salpeter equation (BSE), for excitation energies in condensed matter physics <cit.>. The adoption within the realm of quantum chemistry has been established in recent years <cit.> with the availability of implementations in a wide range of molecular quantum chemistry codes, see e.g., Refs. <cit.>. The success of the GW method is owed to the fact that it offers good accuracy while being computationally feasible for a wide range of systems, c.f. Ref. <cit.>. However, the GW method generally relies on error cancellation, and G_0W_0, in particular, depends on the starting point chosen, the approach used for determining the dielectric function, and the self-consistency scheme chosen for the GW calculation. An excellent overview of the different aspects related to the GW approximation can be found in Ref. <cit.>. Especially the computational cost for determining the screened Coulomb interaction and therefore the G_0W_0 self-energy Σ_0 varies significantly for different practical realizations of the GW method in molecular orbital bases. The “fully-analytic” approach <cit.>, for example, scales as 𝒪(N^6). The scaling can be reduced significantly by numerical integration of the self-energy Σ_0, Σ_0(,,ω) = i/2π∫ dω' e^iω'η G_0(,,ω+ω') W_0(,,ω'), where the non-interacting one-particle Green's function is denoted as G_0 and the screened Coulomb interaction as W_0. To avoid divergences along the real frequency axis <cit.>, the integration in Eq. (<ref>) is commonly performed along the imaginary frequency axis in combination with analytic continuation (AC) to the real frequency axis leading to a formal scaling of 𝒪(N^4) <cit.>. Alternatively, one can employ the so-called contour-deformation approach (CD) <cit.> by dividing the integration in Eq. (<ref>) into an integration along the imaginary frequency axis and the real-frequency axis. 
The scaling, however, is 𝒪(N^4-5) and depends on the quasi-particles to be determined (see Ref. <cit.>). Σ_0 can also be determined within the space-time formulation of the GW method <cit.>. In this approach, the construction of W_0 is performed in imaginary-time rather than frequency space in combination with additional techniques, among others, real-space grid representation of the Green's function <cit.>, pair atomic density fitting <cit.>, or separable density-fitting <cit.> to reduce the overall scaling to 𝒪(N^3). Note that this ansatz shares certain similarities to Laplace-transform (LT) techniques developed in molecular quantum chemistry <cit.>. One drawback of these methods is, however, related to increasing memory requirements and larger prefactors due to the real-space representation <cit.>, potentially uncontrollable errors introduced by exploiting locality <cit.>, or the necessity to construct specialized real-space grids <cit.>. These aspects also lead to more challenging numerical implementations of these methods, potentially limiting their widespread application. This work demonstrates an alternative efficient evaluation of the GW self-energy by combining different ideas for reducing the computational cost based on the AC-GW formulation. In particular, we make use of a Laplace transformation for the evaluation of W_0, a truncation of the auxiliary basis using natural auxiliary functions (NAF) <cit.> and the frozen-core (FC) approximation. We refer to this approach as LT-GW which is based on three guiding principles: (a) a small prefactor should be preserved, (b) adaptation of existing AC-GW implementations should require minimal effort, and (c) significant performance improvements should result for a wide range of system sizes with controllable error.   THEORY – In the following, a concise overview of the modified GW implementation based on the Laplace-transform (LT) technique is given. More detailed information regarding GW implementations based on imaginary frequency integration can be found in Refs. <cit.>. A diagonal element nm for the correlation part of the screened-Coulomb interaction W^c_nm in a molecular orbital basis for an imaginary frequency iω is calculated as W^c_nm(iω') = ∑_PQ R^P_nm{[1 - Π(iω')]_PQ^-1 - δ_PQ}R^Q_nm, where molecular spin-orbital (ϕ) and auxiliary basis function (χ) indices are given in lowercase and uppercase letters, respectively. Furthermore, i,j,… refer to occupied, a,b,… to virtual, and n,m,… to arbitrary orbitals with eigenvalues ϵ. Π_PQ(iω') is evaluated as Π_PQ(iω') = - 2 ∑_iaR^P_ia(ϵ_a - ϵ_i)/ω'^2 + (ϵ_a - ϵ_i)^2 R^Q_ia, and the transformed three-center integrals R^P_nm are defined as R^Q_nm = ∑_P (nm|P) [𝐕^-1/2]_PQ, with (nm|P) = ∫ d∫ dϕ_n() ϕ_m() χ_P()/| - |, and V_PQ = ∫ d∫ dχ_P() χ_Q()/|-|. In AC-GW, the construction of Π_PQ(iω') is the most time-consuming step, formally scaling as 𝒪(N_oN_vN_aux^2) for each imaginary frequency (N_o being the number of occupied orbitals, N_v the number of virtual orbitals, and N_aux the number of auxiliary functions). Finally, the correlation (dynamical) part of the G_0W_0 self-energy Σ^c is obtained (ϵ_F denotes the Fermi-level) Σ_n^c(iω)= -1/π∑_m ∫_0^∞ d ω' iω + ϵ_F - ϵ_m/(iω + ϵ_F - ϵ_m )^2 + ω'^2 W_nm(iω'), which is integrated numerically using a modified Gauss-Legendre quadrature, see Refs. <cit.>. Quasi-particle energies are then determined by AC of Σ^c to the real frequency axis. For the AC to the real frequency axis, we use a N-point Padé approximation as described in the appendix of Ref. 
<cit.>. In this work, we make use of the LT for evaluating Π_PQ(iω'). In a first step, the denominator in Eq. (<ref>) is rewritten as 1/ω'^2 + (ϵ_a - ϵ_i)^2 = ∫^∞_0 dτexp(-(ω'^2 + (ϵ_a - ϵ_i)^2)τ) = ∫^∞_0 dτexp(-ω'^2τ) exp(-( ϵ_a - ϵ_i)^2 τ). holding for (ω'^2 + (ϵ_a - ϵ_i)^2) > 0 which is guaranteed to be true. Replacing the denominator with the integral in Eq. (<ref>) allows to apply a numerical integration of the form 1/ω'^2 + (ϵ_a - ϵ_i)^2 ≈ - ∑_m^N_LT w_m exp(-(ω'^2 + (ϵ_a - ϵ_i)^2) x_m) = - ∑_m^N_LT w_m exp(-ω'^2 x_m) exp(-(ϵ_a - ϵ_i)^2 x_m), where the N_LT quadrature points and their corresponding weights are denoted as x_m and w_m, respectively. Factorizing the exponential functions with frequencies and orbital-energy differences as their arguments through the LT allows evaluating their contributions to Π_PQ(iω') separately as Π_PQ(iω') ≈ -2 ∑_m ∑_iaR^P_ia w_m (ϵ_a - ϵ_i) e^-(ϵ_a - ϵ_i)^2 x_m R^Q_ia_M^m_PQ(iω') e^-ω'^2 x_m. In practice, M^m_PQ(iω') is calculated for each quadrature point, which requires N_LT N_oN_vN_aux^2 operations, followed by the outer loop over imaginary frequencies [see Eq. (<ref>)] counting N_LT N_aux^2 N_iω operations. In contrast, the evaluation of Eq. (<ref>) for the determination of quasi-particle energies requires N_iω N_oN_vN_aux^2 operations. It becomes clear that the formal scaling remains unchanged with 𝒪(N^4) since neither N_iω nor N_LT depends on the system size represented by N. A constant speed-up can, however, be expected using the LT technique as long as N_LT < N_iω which is proportional to the ratio N_iω/N_LT. The natural auxiliary function (NAF) approximation <cit.> reduces the size of the three-index integral tensor that commonly appears in post-SCF methodology making use of the resolution of the identity approximation. Its basis is given by a symmetric, positive definite matrix K that reads K_PQ = ∑_nm R^P_nmR^Q_nm. A rank reduction of the three-index integral list is achieved by first diagonalizing K to yield the NAFs labeled by P̃, ∑_Q K_PQ V_Q,P̃ = V_P P̃ϵ_P̃ , followed by setting up a transformation matrix U_PP̃ that only includes NAFs with corresponding eigenvalues above a certain threshold ε_NAF (assembled from the columns of V_P P̃). Finally, the three-center integral tensor is transformed to the NAF space following R^P̃_nm = ∑_P R^P_nm U_PP̃. In the limit of U including all eigenvectors of K, Eq. (<ref>) represents an orthogonal transformation. Our implementation omits the virtual–virtual part of the sum in Eq. (<ref>) due to its unfavorable scaling with the system size. Closed-shell molecules are handled by including a factor of two in Eq. (<ref>) to account for the single set of spatial orbitals. Determining the NAFs formally scales as 𝒪(N_o N_v N^2_aux). The theoretical speed-up of the NAF approximation in AC-GW calculations becomes apparent when inspecting Eqs. (<ref>) and (<ref>). The time-determining step includes an inner product of the three-index integral tensor contracting the occupied–virtual composite index ia. As a result, the expected speed-up scales quadratically with the quotient of the number of original auxiliary basis functions N_aux and the number of NAFs N_NAF, that is, (N_aux/N_NAF)^2.   Quasi-particle energies using LT-G_0W_0 – A detailed overview of the computational details is given in Sec. S1 of the Supporting Information (SI). In the following, we will demonstrate the robustness, scalability, and speed-up of combining AC-G_0W_0 with the LT, NAF, and FC techniques. 
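Before turning to the benchmarks, the LT evaluation of Π_PQ(iω') described above can be summarized in a short NumPy sketch; it assumes positive quadrature weights such that 1/(ω'^2+Δ^2) ≈ Σ_m w_m exp(-(ω'^2+Δ^2)x_m), leaves the construction of the quadrature itself aside, and is an illustration rather than the authors' implementation.

```python
import numpy as np


def polarizability_lt(R_ia, eps_occ, eps_vir, omegas, x_m, w_m):
    """Laplace-transform evaluation of Pi_PQ(i omega').

    R_ia   : (N_aux, N_occ, N_vir) transformed three-center integrals R^P_ia
    omegas : imaginary-frequency grid; x_m, w_m: Laplace quadrature points and
             weights for 1/(w'^2 + d^2) ~ sum_m w_m exp(-(w'^2 + d^2) x_m)
    """
    delta = eps_vir[None, :] - eps_occ[:, None]                 # (N_occ, N_vir)
    n_aux = R_ia.shape[0]
    pi = np.zeros((len(omegas), n_aux, n_aux))
    for x, w in zip(x_m, w_m):
        weight = w * delta * np.exp(-(delta ** 2) * x)          # per ia pair
        M = np.einsum("pia,ia,qia->pq", R_ia, weight, R_ia)     # M^m_PQ
        pi += -2.0 * np.exp(-(omegas ** 2) * x)[:, None, None] * M[None, :, :]
    return pi                                                    # (N_freq, N_aux, N_aux)
```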
First, its accuracy is determined for a subset of the GW100 benchmark set <cit.>. Reference orbitals were obtained using the Hartree–Fock approximation throughout. All results are compared to reference quasi-particle (QP) energies based on the “fully-analytic” evaluation of the G_0W_0 self-energy without employing the RI approximation (also for the mean-field calculation) <cit.>. The results of 15 representative molecular systems are explicitly shown here and deviations for the rest of the benchmark set can be found in the SI. Note that we omitted all molecular systems containing very heavy atoms such as iodine and xenon, as well as the rubidium and silver dimers because we restrict ourselves here to a non-relativistic description and do not use effective core potentials in this work. This reduces the total number of systems included in our calculations to 93. The signed error for the HOMO and LUMO QP energies relative to the “fully-analytic” evaluation of the G_0W_0 self-energy without making use of the RI approximation are shown in Tabs. <ref> and <ref>. The approximate treatments include (a) the “fully-analytic” approach using the RI approximation, (b) AC-G_0W_0, (c) AC-G_0W_0 in combination with LT (ε_LT=10^-7), (d) AC-G_0W_0 in combination with FC, (e) AC-G_0W_0 in combination with the NAF approximation (ε_NAF = 10^{-6,-4,-2}), and (f) combining AC-G_0W_0 with LT/NAF/FC (ε_LT=10^-7, ε_NAF = 10^{-6,-4,-2}). Comparing the “fully-analytic” evaluation with and without the RI approximation, a mean absolute error (MAE) of 1.1 meV (HOMO) and 1.6 meV (LUMO) in the quasi-particle energies is found. Virtually identical deviations are obtained for AC-G_0W_0 highlighting its applicability for determining valence G_0W_0 quasi-particle energies. Applying the LT leads to almost identical results with deviations smaller than 0.1 meV, numerically justifying the chosen parameters for the LT quadrature. Introducing additional approximations such as NAF and FC increases the QP errors. However, the overall accuracy for the different thresholds and combinations of the various approximations remains below an MAE of 10.0 meV for both HOMO and LUMO quasi-particle energies with the largest deviation of 29.6 meV for the HOMO quasi-particle energy of vinyl bromide in the case of FC and AC/FC/LT/NAF. As described in the SI, this error originates from the FC for bromine and can readily be reduced to below 5 meV by adjusting the number of frozen core orbitals. Because all systems in the following mainly contain first- and second-row elements (with the exception of WW-6 which is separately benchmarked against non-FC calculations), we continue to use the default number for frozen core orbitals as described in Sec. S1 of the Supporting Information. From the above analysis, it becomes clear that AC-G_0W_0 in combination with a comparatively loose NAF threshold of 10^-2 leads to an almost negligible error. As a result, all further calculations shown in this article will be confined to this threshold. Next, we performed G_0W_0 calculations on water clusters (see Fig. <ref>) of increasing size containing ten to 100 water molecules (corresponding to 430 to 4300 SCF basis functions in a def2-TZVP basis, respectively) and investigate QP energies and computational timings (computational details are given in Sec. S1 of the Supporting Information). 
The geometries were obtained by first generating a cubic 20× 20× 20 Å^3 water cluster containing 233 water molecules with VMD <cit.>, optimizing it with GFN2-xTB (6.4.1) <cit.> and then including the respective number of molecules closest to the center of mass of the whole cluster. In Fig. <ref>, we display the signed error in QP energies as a function of the number of molecules included in the water cluster for the HOMO and the LUMO for the different approximate strategies employed here as well as a combination thereof. Again, we find that the LT approximation does not introduce significant errors in QP energies for either the HOMOs or the LUMOs. For the NAF approximation (ε_NAF = 10^-2), the error with respect to the reference calculation is constant at about 1.5 meV and 3.0 meV for the HOMO and the LUMO, respectively. For the FC approximation, a constant error of about 3.5 meV and -0.5 meV is observable for the HOMO and the LUMO energies, respectively. While the error of the approximation combining LT, NAF, and FC exceeds the individual errors in the HOMO case (about 4.5 meV), we find partial error cancellation in the LUMO case (about 1.8 meV). Most importantly, however, it can be seen that (a) the error in QP energies is essentially independent of the system size and (b) the magnitude of QP energy errors is within a tolerable range using the approximations and thresholds suggested here (compare SI, Sec. 1). As a next step, we show computational timings of the various G_0W_0 methods. To assess the practical scaling behavior with the system size, we consider a double logarithmic plot of wall-clock timings for the calculation of the screened Coulomb interaction W_0 [see, e.g., Eq. (<ref>)] as a function of the number of SCF basis functions in Fig. <ref>. A non-logarithmic wall-clock timing plot along with the resulting speed-ups can be found in Fig. S2 of the Supporting Information. Taking a look at the corresponding linear fits performed on the data in Fig. <ref>, we find a slope of 3.34 for the unmodified AC-G_0W_0 algorithm, which is only slightly smaller than the formal scaling exponent of four that would be expected for the AC approach. The exponent is reduced by both the FC and NAF approximations to 3.30 and 3.13, respectively, where no such reduction would be expected for the exponent but rather for the prefactor only. Here, we note that the number of NAFs included in the calculations is on average 25–30% lower than the number of original auxiliary basis functions. For the water cluster containing 100 water molecules, the auxiliary-basis size reduction is 26%, which should result in a speed-up of 0.74^-2≈ 1.83, and which is close to the observed speed-up of 2.0. The LT approximation leads to a lowering of the exponent from 3.34 to 2.78. In this case, the expected speed-up should be proportional to the quotient of the original number of imaginary frequencies and the number of Laplace grid points (see Eq. <ref>). For the cluster containing 100 water molecules, this ratio is 128/17 ≈ 7.5 which compares well with the observed speed-up of 6.7. Inspecting the exponents of the two combined approximations LT/NAF as well as LT/NAF/FC, we find that the individual reductions in computational scaling add up so that for LT/NAF/FC the slope of the linear fit (as a measure of the computational scaling) is lowered by almost one with respect to the regular AC-G_0W_0 calculation. 
For the presented wall-clock timings, it can thus be seen that, although the formal scaling behavior is unchanged by the approximations introduced, LT-G_0W_0 leads to a drastically lower practical computational scaling while retaining a very high degree of accuracy. Additionally, we consider absolute timings of the G_0W_0 and eigenvalue-self-consistent GW (five cycles) calculations for the cluster containing 100 water molecules to illustrate the speed-up that can be expected in practical calculations with moderately sized systems and the LT-G_0W_0 method. The results can be found in Tab. <ref>. It turns out that the speed-ups of the composite approximation LT/NAF/FC are 18.1 and 17.6 for G_0W_0 and evGW, respectively, which slightly exceeds the product of the speed-ups of the individual LT (6.7 and 6.6), NAF (2.0 and 2.1), and FC (1.2 and 1.3) approximations, each amounting to roughly 16. The individual approximations thus do not interfere with each other but can constructively be used in combination, and the respective speed-up directly carries over to (partially) self-consistent GW calculations. Finally, we note that the G_0W_0 calculation using only the LT approximation is about twice as fast as the regular one already for the smallest investigated water cluster containing 10 molecules (10 seconds vs 20 seconds), providing evidence for the small prefactor of LT-GW combined with the NAF and FC approximations. LT-G_0W_0 with BSE – We apply a combination of LT-G_0W_0 and the Bethe–Salpeter (BSE) equation to investigate the effect of the LT approximation on the accuracy of linear absorption spectra. The BSE calculations are performed with the efficient integral-direct resolution of the identity implementation for the Hartree–Fock and long-range exchange part of the response matrix in Serenity originally presented in our work in Ref. <cit.>. As introduced above, the LT-G_0W_0 method refers to the application of the LT, NAF, and FC approximation and will be used in the following. As a first test case, we consider the WW-6 dye relevant in photovoltaics <cit.>. The molecular geometry was taken from Ref. <cit.> and is displayed in Fig. <ref>. Within the def2-TZVP basis set, there are 5583 SCF basis functions as well as 13802 auxiliary basis functions for the GW/BSE part of the calculation. In Fig. <ref>, we compare the linear absorption spectra for the WW-6 system that was obtained with the regular AC-G_0W_0/BSE calculation with the LT-G_0W_0 calculation employing both the NAF (ε_NAF = 10^-2) and the FC approximations. In both cases, eight of the lowest-lying excitation energies and corresponding oscillator strengths were determined. The FC approximation was not applied for the BSE calculations. We find no visible difference between the linear absorption spectra calculated with the regular and the approximate approach. Numerical results for QP energies as well as excitation energies and oscillator strengths can be found in Tabs. <ref> and <ref>, respectively. The mean deviation of QP energies is about 9.6 meV which far exceeds the mean error of excitation energies and oscillator strengths which amount to 0.75 meV and 0.39· 10^-3 a.u., respectively. The occupied and virtual QP energy errors are more systematic for this test system than for the HOMOs and LUMOs of the water clusters investigated beforehand. This results in more favorable error cancellation for excitation energies, which depend on QP energy differences. 
The errors of the oscillator strengths are equally negligible, which, in turn, is probably a result of the eigenvectors of the BSE problem being largely unaffected because of the error cancellation mentioned above. Inspecting the computational timings (given in Fig. <ref>), we find that in the regular case, the overall wall-clock timings are dominated by the calculation of the screened Coulomb interaction W with 2293 minutes, while in the approximate case, the BSE part of the calculation exceeds the time needed for the GW calculation by far. Here, the overall G_0W_0 calculation time is, in fact, dominated by the preparation of the three-index MO integrals, as the calculation of W only took 103 minutes. We also note that for the approximate calculation, setting up the NAF matrix, diagonalizing it, and then performing the NAF transformation to the three-index integral tensor introduces a small overhead of about 25 minutes (or ten percent), which is summarized in the timings for the “MO Ints”. The number of NAFs included in the calculation was 8755 corresponding to a reduction of 37% with respect to the full number of auxiliary basis functions. The speed-up for the entire calculation amounts to 2.3 (3915 minutes vs 1720 minutes) while the speed-up for the calculation of the screened Coulomb interaction alone is 22.3 (2293 minutes vs 103 minutes). These calculations demonstrate that LT-GW is able to provide accurate references for BSE calculations, while drastically reducing the computational demand of the preceding G_0W_0 calculation. As a second test system, we consider stacks of BODIPY dyes, which are of interest in the field of supramolecular polymer design <cit.>. Additionally, supermolecular BODIPY-based compounds are interesting for GW/BSE calculations in particular because alternative (standard) methods for predicting their absorption spectra may either lack the necessary accuracy (e.g. linear response time-dependent density-functional theory, see e.g. Ref. <cit.>) or are simply not feasible for this kind of system size (e.g. coupled cluster-based methodology such as coupled cluster with singles and approximate doubles <cit.> and even local variants thereof <cit.>). In our calculations, we include monomer, dimer, and tetramer geometries (provided by the authors of Ref. <cit.> and displayed in Fig. <ref>) and compare our G_0W_0/BSE-based spectra with experimental ones in Fig. <ref>. For all n-mers, 32 of the lowest-lying excitation energies and corresponding oscillator strengths were determined after calculating 20 of both the lowest-lying virtual and highest-lying occupied QP energies for each monomer in each geometry, that is, 40 for the dimer as well as 80 for the tetramer. Based on the findings of the approximate calculations for the WW-6 test system, we omit G_0W_0 calculations that do not apply any further approximations here. The experimental spectra exhibit three main bands at about 600, 400, and 300 nm. Interestingly, a strong blue shift of, in particular, the energetically lowest-lying absorption band is observed upon aggregation (experimentally induced by lowering the solution temperature). This behavior can most likely be attributed to the corresponding interaction of the transition dipole moments of the monomers in this stacking pattern. Going over to the computed spectra, one finds that the monomer spectrum reproduces the position and intensity of the experimental bands with a high degree of accuracy (given a constant shift of the absorption spectrum of 0.48 eV). 
It can further be seen that the blue shift of the lowest-lying absorption band of the dimer compares well with the experimental one. The computed tetramer spectrum exhibits a blue shift far exceeding the experimental one. This is most likely due to a combination of different factors. On the one hand, the experimental spectrum is a combination of several different aggregates of varying sizes and particular arrangements. On the other hand, the tetramer geometry was obtained by stacking two dimers on top of each other followed by a reoptimization. As a result, the distance between the inner two monomers is smaller than the distance between the outer pairs which could lead to an overestimation of the excitonic couplings leading to the blue shift. The GW calculation (screened Coulomb interaction W) took 6, 70, and 813 minutes for the monomer, dimer, and tetramer, respectively.   CONCLUSION – We have presented the LT-GW method, for which we numerically demonstrated that it follows our three main objectives: (a) a small prefactor, (b) minimal effort for adaptation in existing AC-GW codes, and (c) significant performance improvements (up to 22-fold) for a wide range of system sizes with controllable error. For this, LT-GW combines the GW approximation in the context of the analytic continuation (AC) approach with a Laplace transformation (LT), natural auxiliary functions (NAFs), and the frozen-core (FC) approximation. We have highlighted its synergy with the BSE for calculations of excitation energy and properties for extended systems consisting of up to 7412 basis functions. We are convinced that the LT-GW method constitutes a practical and widely applicable extension to existing GW implementations for molecular systems. In the LT-G_0W_0/BSE calculations, we have shown that the computational time is now dominated by the BSE calculation. Based on our three guiding principles, we aim to achieve similar improvements also for the BSE in the future by making use of, for example, minimal auxiliary basis sets <cit.> or simplified integrals <cit.>.   Computational details, additional analysis of quasi-particle energies for atoms and molecules from the GW100 benchmark set as well as non-logarithmic wall-clock-timings and the speed-up plot of the water clusters can be found in the Supporting Information. J.T. gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through DFG-495279997. N.N. and J.N. gratefully acknowledge funding by the DFG through SFB 1459 (Project A03, Project-ID A03-433682494). We would like to thank Christian Mück-Lichtenfeld for providing the monomer, dimer, and tetramer BODIPY geometries originally presented in Ref. <cit.>. We would like to thank Alexander Rödle and Gustavo Fernández for providing the raw data of the experimental absorption spectra originally presented in Ref. <cit.>. The data supporting the findings of this study are available either within the supplementary material or upon reasonable request from the authors.
http://arxiv.org/abs/2307.04337v1
20230710042906
Detection of temporal fluctuation in superconducting qubits for quantum error mitigation
[ "Yuta Hirasaki", "Shunsuke Daimon", "Toshinari Itoko", "Naoki Kanazawa", "Eiji Saitoh" ]
quant-ph
[ "quant-ph" ]
Detection of temporal fluctuation in superconducting qubits for quantum error mitigation

Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan. Author to whom correspondence should be addressed: [email protected]. Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan. Quantum Materials and Applications Research Center, National Institutes for Quantum Science and Technology (QST), Tokyo 152-8550, Japan. IBM Quantum, IBM Research-Tokyo, 19-21 Nihonbashi Hakozaki-cho, Chuo-ku, Tokyo, 103-8510, Japan. Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan. Institute for AI and Beyond, The University of Tokyo, Tokyo 113-8656, Japan. WPI Advanced Institute for Materials Research, Tohoku University, Sendai 980-8577, Japan. Institute for Materials Research, Tohoku University, Sendai 980-8577, Japan.

We have investigated the instability of a superconducting quantum computer by continuously monitoring the qubit output. We found that qubits exhibit a step-like change in their error rates. This change is observed repeatedly, and each step persists for several minutes. By analyzing the correlation between the increased errors and the anomalous variance of the output, we demonstrate quantum error mitigation based on post-selection. A numerical analysis of the proposed method was also conducted.

Eiji Saitoh, August 12, 2023

Over the last few decades, there has been a growing trend towards developing quantum computers, and quantum engineering technologies are advancing at an overwhelming pace <cit.>. Among the diverse materials and artificial atoms proposed to serve as quantum bits (qubits), superconducting qubits <cit.> are one of the most promising candidates. A number of studies have been conducted to improve the performance of superconducting qubits, and several breakthroughs have been achieved <cit.>. Nevertheless, even state-of-the-art qubits interact unpredictably with their surrounding environment and suffer from noise during computation, which places a critical limit on their computational abilities <cit.>. Several attempts have been made to identify microscopic pictures of these unexpected interactions and improve device performance <cit.>. Recent evidence suggests that superconducting qubits exhibit a temporal change in their coherence times under continuous measurement <cit.>. Qubit instability poses a serious threat to quantum computers. A sudden decrease in the qubit lifetime can temporarily degrade the device's performance. In addition, most current quantum error mitigation (QEM) techniques <cit.> are unable to mitigate time-dependent noise <cit.>, and a temporal change in decoherence calls for re-learning of a noise model or developing more sophisticated QEM techniques. Therefore, it is imperative to investigate the dynamics of a superconducting qubit system and assess its stability. In this paper, we report a temporal change in the qubit errors of a superconducting quantum computer. We also develop an anomaly detection method for such a temporal change in errors. All the experiments were performed on one of the IBM Quantum systems. This quantum computer has 27 transmon qubits, and the readout assignment errors are around 1% on average. The energy relaxation times of the qubits are approximately 1.2× 10^2 μs on average, with phase damping times around 1.2× 10^2 μs. We repeat the same quantum circuit and subsequent measurement L times at a sampling rate of several hundred microseconds.
As a result, we obtain a binary sequence 𝐗∈{0, 1}^L. To estimate the qubit output fluctuations, we transform a subsequence of 𝐗 with size N into a fluctuation indicator S, which is defined by S = 1/m-1∑_j = 1^m(Y_j - Y)^2/Y ( 1 - Y)/n., where Y_j = 1/n∑_i = (j-1)n + 1^jnX_i, and Y = 1/m∑_j = 1^mY_j with some integers n and m that satisfy the condition N = nm≪ L. In the experiments below, we obtain a time series of S from the entire sequence 𝐗∈{0, 1}^L using the following procedure. We first take the average of every n data to obtain a time series 𝐘 with the length M = ⌊L/n⌋. We then calculate the time series 𝐒 from 𝐘 by applying a sliding window of size m, and thus the length of 𝐒 is given by l = M- m + 1. The indicator S is introduced based on the following background. From the Born's rule, the measurement outcome X_i in the i-th measurement is a random variable whose distribution is given by the binomial distribution B(1, P_1), where P_1 denotes the probability of measuring the excited state. The average Y_j is also a random variable whose probability distribution is determined by the binomial distribution B(n, P_1). Thus, the expectation value of the sample mean Y = 1/m∑_j = 1^mY_j is equal to P_1, and that of the unbiased sample variance V_samp = 1/m-1∑_j = 1^m(Y_j-Y)^2 is equal to P_1(1 - P_1)/n. Since P_1 is unknown, we estimate the expected variance with V_bi = Y(1-Y)/n, and S is given by the ratio of V_samp and V_bi in Eq. (<ref>). Intuitively, S quantifies the extent to which the sample variance deviates from what is expected under the assumption that {X_i}_i are generated from an identical binomial distribution. S can be used to detect a temporal change in qubit errors and exclude abnormal outcomes in quantum computing as discussed later. Note that S is a random variable obtained from the random variables X_1, X_2, …, X_N and S takes several values with different probabilities. The probability distribution of S is well described by the chi-squared distribution with (m-1) degrees of freedom and the mean of S is given by 1 with the variance σ^2 = 2/m-1, whose rigorous derivation is provided in the latter part of this letter. Thus, when we calculate S from an experimental result (for clarity we represent the experimental value as S_exp and use S_theo when we describe a stochastic characteristic of S), S_exp should spread randomly around 1 with the statistical fluctuation σ = √(%s/%s)2m-1. If S_exp significantly deviates from the probabilistic behavior of S_theo, we reject the hypothesis that the binary data X_1, X_2,… ,X_N are generated from an identical binomial distribution B(1, P_1) and the data are classified as anomalous in our QEM method. First, we performed a one-qubit continuous measurement on the IBM quantum processor. The pulse sequence is depicted in Fig. <ref>(a). The qubit is initialized to the ground state with the reset pulse, excited with the π pulse, and then measured. We repeated this pulse sequence for 1000 seconds with the repeat delay time τ≈ 6× 10^2 μ s to record normal and abnormal behavior in a single set of experimental data. The time series of S_exp defined by Eq. (<ref>) was calculated from the obtained outcomes with the parameters n = m = 128 and L = 1787904. Figure <ref>(b) illustrates the time series of S_exp. The value of S_exp remains almost constant for the first 230 s. This behavior is consistent with the fact that the expectation value of S_theo is equal to 1 with the standard deviation σ≈ 0.125. 
In the next moment, however, S_exp abruptly increases to approximately 4 [see the red band in Fig. <ref>(b)], which is 24 standard deviations above the mean, and this cannot be explained in terms of the statistical error. This increase persists for 110 seconds, and sharp switching behavior is repeatedly observed in the rest of the record as visualized by the four red bands in Fig. <ref>(b). This phenomenon is observed repeatedly in other experiments on . Figure <ref>(c) compares the error rates in two time periods. The red bar represents 1 - P_1 in the time period from 430 s to 720 s, while the black bar shows that from 870 s to 1000 s, where P_1 denotes the average of the binary outcomes and should be 1 in the absence of errors. The temporal increase in S_exp appears to be closely related to a temporal increase in errors. This correlation between S_exp and errors suggests that we can reduce errors by classifying obtained outcomes based on the values of S_exp and eliminating the anomalous outcomes. Based on this, we propose a QEM technique based on post-selection (or we also call it an anomaly detection). We first compute the time series 𝐒_exp from an obtained binary sequence 𝐗. Then, we compare each element of 𝐒_exp against a threshold value S_thre. If an element exceeds the threshold, we label the corresponding subsequence of 𝐗 as anomalous and segregate it from the remaining sequence. The critical value is determined based on the p-value in the detection and here we employ S_thre = 1.5, which corresponds to the p-value of 0.006334%. This method can be easily extended to multi-qubit computations by computing the time series of S_exp for each qubit individually. We performed a Bell state measurement to demonstrate the proposed QEM as illustrated in Fig. <ref>. We obtain two binary sequences from two qubits and calculated the time series of S_exp from the two sequences individually. For each time window with size N, we calculate S_1 and S_2 from the two binary subsequences by Eq. (<ref>). If either S_1 or S_2 exceeds the threshold value S_thre = 1.5, the corresponding two binary subsequences are labeled as anomalous and labeled as normal otherwise. The time series of *Z_1Z_2 is depicted in Fig. <ref>(a), where *Z_1Z_2 denotes the expectation value of the observable Z_1Z_2, and it is calculated from the two binary sequences with the same window. *Z_1Z_2 should be 1 in the absence of errors. The red colored region represents the time periods labeled as anomalous based on S_exp and the blue represents the normal state. *Z_1Z_2 exhibits a great decrease to around 0.85 in the anomalous time period [the red band in Fig. <ref>(a)], while it shows little fluctuation around 0.97 in the normal time periods. We obtain two histograms from the normal and anomalous outcomes as depicted in Fig. <ref>(b). The probabilities of measuring the four states, |00⟩,|01⟩, |10⟩, and |11⟩, are visualized by the black bars in Fig. <ref>(b). The top panel shows the probability distribution calculated from the data classified as the normal state [colored blue in Fig. <ref>(a)], while the one at the bottom depicts that from the anomalous state (colored red). The probability distribution of the anomalous state exhibits a prominent peak in the |10⟩ state. We compare the values of 1 - *Z_1Z_2 obtained from the two categorized data as shown in Fig. <ref>(c). This means that our method successfully removes the abnormal data and improves the fidelity in estimating the expectation value of a physical observable. 
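The post-selection just described requires only a few lines of code. The following is a minimal NumPy sketch (with n = m = 128 and S_thre = 1.5 as above); the function names and the handling of edge cases are our own illustrative choices rather than the actual analysis code.

```python
import numpy as np

def fluctuation_series(X, n=128, m=128):
    """Sliding-window fluctuation indicator S of Eq. (<ref>) for one qubit,
    assuming X is the recorded bit string of length L with entries in {0, 1}."""
    X = np.asarray(X, dtype=float)
    M = len(X) // n
    Y = X[:M * n].reshape(M, n).mean(axis=1)          # block means Y_j
    S = np.empty(M - m + 1)
    for t in range(len(S)):
        win = Y[t:t + m]
        ybar = win.mean()
        v_samp = win.var(ddof=1)                      # unbiased sample variance
        v_bi = max(ybar * (1.0 - ybar) / n, 1e-12)    # expected binomial variance
        S[t] = v_samp / v_bi
    return S

def anomalous_windows(bitstrings, s_thre=1.5, n=128, m=128):
    """Flag a window as anomalous if the indicator of any qubit exceeds s_thre."""
    S = np.vstack([fluctuation_series(x, n, m) for x in bitstrings])
    return (S > s_thre).any(axis=0)                   # True -> discard these outcomes
```

Outcomes that fall in flagged windows are discarded before estimating expectation values such as ⟨Z_1Z_2⟩ from the remaining data.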
We then benchmarked the proposed protocol in a quantum volume circuit <cit.> as an example of sampler tasks, in which we measure the probability distributions of the final quantum states. The result is shown in Fig. <ref>. The circuit comprises three qubits and the qubits are measured after three layers of operation as shown in Fig. <ref>(a). Each layer is characterized by sampling a random permutation and then applying a random unitary transformation to the first two qubits. We compute the time series of S_exp for the three qubits and classify the outcomes into the anomalous and normal state data as illustrated in Fig. <ref>(b). The blue regions represent the outcomes classified as normal, while the red corresponds to the anomalous. We obtain two probability distributions from the two categorized experimental data and compare them with the ideal distribution (the black bars) as depicted in Fig. <ref>(c). The distribution derived from the normal data is overall closer to the ideal distribution, demonstrating a 5.5% improvement in the Hellinger fidelity<cit.>. We note that in our setup the circuit outcomes have been recorded for a sufficiently long time to investigate the time variation of S_exp. However, our mitigation technique can be applied at a moderate sampling overhead of tens of thousands shots, which is readily available with IBM Quantum processors. Finally, we perform a theoretical analysis on the probability distribution of S_theo introduced in Eq. (<ref>). Note that the i-th measurement outcome X_i is given by a random variable following the Bernoulli distribution B(1, p_i), where p_i is the probability of measuring the excited state in the i-th measurement. Here we make two fundamental assumptions, namely, p_i is a constant P_1, and {X_i}_i independently obey the identical Bernoulli distribution. Under these assumptions, it analytically follows that the random variables nY_j = ∑_i = nj + 1^(n + 1)jX_i independently obey the binomial distribution B(n, P_1) and the variance of {Y_j}_j is given by P_1(1 - P_1)/n. Since n is sufficiently large (in the experiments n = 128), we can apply the central limit theorem and approximate the probability distribution of {Y_j}_j with a Gaussian distribution. Then we express S_theo in Eq. (<ref>) in terms of new random variables {Z_j}_j defined by Z_j = Y_j - P_1/√(P_1(1 - P_1)/n), which independently obey the standard normal distribution 𝒩(0, 1), where 𝒩(μ, σ^2) denotes a Gaussian distribution with the mean μ and the variance σ^2. The expression of S_theo is given by S_theo = 1/m-1∑_j = 1^m (Z_j-Z)^2/( Z/√(n) + √(P_1/1 - P_1))( -Z/√(n) + √(1 - P_1/P_1)), where Z = 1/m∑_j = 1^m Z_j∼𝒩( 0, 1/m). Z/√(n) takes values of order 1/√(nm) with high probability, and thus, when 1/√(nm) is much smaller than √(P_1/1 - P_1) and √(1 - P_1/P_1), Z/√(n) is negligible compared to √(1 -P_1/P_1) and √(P_1/1 - P_1) with a high likelihood. As a result, Eq. (<ref>) reduces to S_theo≈S̃≡1/m-1∑_j = 1^m(Z_j-Z)^2 ∑_j = 1^m(Z_j - Z)^2 obeys the chi-squared distribution with (m - 1) degrees of freedom<cit.> and therefore the statistical characteristic of S̃ is analytically derived. In particular, the mean of S̃ is μ = 1 and the variance is σ^2 = 2/m - 1, which is independent of P_1. This fact suggests that we can use the same threshold for anomaly detection in practical quantum computation where P_1 (or the measured quantum state) is unknown. 
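A compact Monte-Carlo check of this null distribution (the full simulation is discussed below) can be written as follows; the chosen P_1 values and sample counts are illustrative.

```python
import numpy as np

def simulate_S(p1, n=128, m=128, trials=100_000, seed=0):
    """Sample S under the i.i.d. Bernoulli(p1) hypothesis: draw m block means
    from B(n, p1)/n and form the variance ratio of Eq. (<ref>)."""
    rng = np.random.default_rng(seed)
    Y = rng.binomial(n, p1, size=(trials, m)) / n
    ybar = Y.mean(axis=1)
    v_samp = Y.var(axis=1, ddof=1)
    return v_samp / (ybar * (1.0 - ybar) / n)

for p1 in (0.5, 0.9, 0.99):
    S = simulate_S(p1)
    # expected from chi-squared_(m-1)/(m-1): mean ~ 1, variance ~ 2/(m-1) ~ 0.0157
    print(p1, S.mean().round(3), S.var().round(4))
```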
The condition √(%s/%s)1 - P_1P_1, √(%s/%s)P_11-P_1≫1/√(nm) is satisfied in most of our experiments since we use n = m = 128, and the inequality 0.01≤ P_1≤ 0.99 holds due to the 1% readout assignment errors. We then performed a Monte-Carlo simulation to support the validity of the discussions above, and the result is illustrated in Fig. <ref>. We numerically prepared 100,000 samples of S_theo for each of P_1 values we chose and compared the distributions of S_theo with those of S̃. The sample means of S_theo for several P_1 values (the blue dots) and the expectation value of S̃ (*S̃ = 1) (the red line) are depicted in Fig. <ref>(a), while Fig. <ref>(b) compares the variance of S_theo and S̃. The result provides a close similarity between the numerical and theoretical analysis for all the P_1 values. The probability density functions generated from the Monte-Carlo simulation are presented with the blue histograms in Fig. <ref>(c) for several P_1 values. The red lines show the functions calculated theoretically, showing a good agreement with the numerical histograms. In conclusion, we have investigated a temporal change in fluctuations in superconducting qubits by developing a statistic that quantifies the qubit stability. The measured temporal change is closely related to a temporal increase in errors, and we have demonstrated QEM by analyzing the correlation of the fluctuation. Furthermore, we have conducted an analytical study on the QEM method, and performed a numerical simulation to verify the result. This work was supported by CREST (Nos. JPMJCR20C1, JPMJCR20T2) from JST, Japan; Grant-in-Aid for Scientific Research (S) (No. JP19H05600), Grant-in-Aid for Transformative Research Areas (No. JP22H05114) from JSPS KAKENHI, Japan. This work is partly supported by IBM-Utokyo lab. § DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request. § AUTHOR DECLARATIONS §.§ Conflict of Interest The authors have no conflicts to disclose. §.§ Author Contributions Y. Hirasaki: Conceptualization (equal); Formal analysis (lead); Investigation (lead); Methodology (lead); Software(lead); Validation (equal); Writing – original draft (lead). S. Daimon: Conceptualization (lead); Funding acquisition (equal); Investigation (supporting); Methodology (supporting); Project administration (lead); Software(equal); Supervision (supporting); Validation (equal); Writing – review & editing (supporting). T. Itoko: Methodology (supporting); Validation (supporting); Writing – review & editing (supporting). N. Kanazawa: Project administration (supporting); Software(supporting); Supervision (supporting); Writing – review & editing (supporting). E. Saitoh: Funding acquisition (lead); Project administration (equal); Supervision (lead); Validation (equal); Writing – review & editing (lead).
http://arxiv.org/abs/2307.05128v1
20230711091016
One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations
[ "Kevin Hernandez-Diaz", "Fernando Alonso-Fernandez", "Josef Bigun" ]
cs.CV
[ "cs.CV" ]
Halmstad University, Halmstad, Sweden. Corresponding author: Kevin Hernandez-Diaz (e-mail: [email protected]).

One weakness of machine-learning algorithms is the need to train the models for a new task. This presents a specific challenge for biometric recognition due to the dynamic nature of databases and, in some instances, the reliance on subject collaboration for data collection. In this paper, we investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition, a biometric recognition task. We analyze the outputs of CNN layers as identity-representing feature vectors. We examine the impact of Domain Adaptation on the network layers' output for unseen data and evaluate the method's robustness concerning data normalization and the generalization of the best-performing layer. Using out-of-the-box CNNs trained for the ImageNet Recognition Challenge together with standard computer vision algorithms, we improved state-of-the-art results that had relied on networks trained with biometric datasets containing millions of images and fine-tuned for the target periocular dataset. For example, for the Cross-Eyed dataset, we could reduce the EER by 67% and 79% (from 1.70% and 3.41% to 0.56% and 0.71%) in the Close-World and Open-World protocols, respectively, for the periocular case. We also demonstrate that traditional algorithms like SIFT can outperform CNNs in situations with limited data or in scenarios where the network has not been trained with the test classes, as in the Open-World mode. SIFT alone was able to reduce the EER by 64% and 71.6% (from 1.7% and 3.41% to 0.6% and 0.97%) for Cross-Eyed in the Close-World and Open-World protocols, respectively, and by 4.6% (from 3.94% to 3.76%) in the PolyU database for the Open-World, single-biometric case.

Biometrics, Deep Representation, Periocular, Transfer Learning, One-Shot Learning

One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations
Kevin Hernandez-Diaz, Student Member, IEEE, Fernando Alonso-Fernandez, Member, IEEE and Josef Bigun, Fellow, IEEE
August 12, 2023

§ INTRODUCTION Convolutional Neural Networks (CNNs) have increasingly become the standard in applications of Computer Vision and Pattern Recognition. From object detection <cit.> <cit.> to object recognition <cit.>, data generation <cit.>, and image manipulation <cit.>, CNNs dominate the state of the art. The popularity and success of CNNs largely stem from their ability to learn and extract highly discriminative features, as well as to easily adapt to different applications such as medical data <cit.>, autonomous driving <cit.>, or, in our case, biometric recognition <cit.>. Nonetheless, to achieve good results, CNNs usually require a substantial amount of varied data to allow the network to learn the abstraction of objects <cit.>. Since acquiring such data is often expensive and infeasible, many researchers are working to make CNNs more efficient <cit.>. Transfer Learning is one of the most common approaches to tackling data scarcity.
It aims to adapt a network trained for a usually more complex task for which much more training data exist to a new target domain. The idea is to take advantage of the feature extraction power of the pre-trained CNN and fine-tune it for the specific task under consideration. One-shot learning is an extreme case of Transfer Learning, where no data is available to train the network for the new target. Instead, a vector of embedding, or deep representation, is extracted from a class-sample image using a pre-trained network for comparison. Then, distance or similarity-based metrics between the deep representations are used to determine if a new image belongs to the same class. Typically, the last layer before the classification stage is used to extract such deep representations. However, as this paper and previous preliminary studies on periocular recognition show <cit.> <cit.>, selecting the final layer of the network may not always be the best option. Moreover, as we also study here, the best layer selection depends heavily on the input data normalization as well as the amount and variety of data available when training the model. With the need for One-Shot Learning appeared newer approaches like the use of Contrastive-Loss <cit.> and Triplet-Loss <cit.>. In these types of losses, the network is optimized to extract a vector of embedding that maximizes the inter-class distance and, in the case of Triplet-Loss, also minimizes the intra-class distance up to some margin. The approaches for One-Shot Learning explained in this section can, once the network has been trained on a large dataset, be used directly on other datasets for recognition. For example, to use a VGG-Face <cit.> network directly on a target Face dataset. The eye region is one of the most discriminative areas of the face <cit.> <cit.>. However, it was not until 2009 when <cit.> first introduced the concept of periocular recognition. They described this new biometric as using the facial area in the immediate vicinity of the eye to recognize a person's identity. Besides the iris, a well-established biometric trait <cit.>, the eye's shape, texture, and subcomponents, like eyebrows, eyelids, commissures, or skin, provide much information that one can exploit to recognize a person. This periocular area has proven to achieve high recognition performance <cit.>, not only for identities but also for soft-biometric traits like gender, ethnicity, and age <cit.><cit.><cit.><cit.><cit.>, while having fewer acquisition constraints than other ocular modalities like the iris. However, despite its potential, large periocular datasets are scarce <cit.>, leading to limited research in this area. Nonetheless, due to the recent Covid-19 pandemic and the widespread use of face masks, this region has gained significant attention within the biometric community <cit.>. In a previous contribution <cit.>, we evaluated a selection of CNNs for One-shot periocular recognition on the UBIPr database. Later in <cit.>, we also analyzed the utility of a pre-trained CNN for few-shot cross-spectral periocular recognition on the IMP database. Here, we evaluate a wider selection of networks and databases. We also analyze the effect of other factors, such as image pre-processing, domain adaptation, and data partition. Our contributions are shown below: * State-of-the-art (SOA) periocular recognition comparison using One-Shot Learning. 
We report the performance per layer of six widely used CNN architectures (ResNet101v2, DenseNet121, VGG19, Inceptionv3, MobileNetv2, and Xception), as well as the most widely used hand-crafted features in periocular recognition (LBPH, HOG, SIFT) for three different datasets (IMP, PolyU, Cross-Eyed). * We investigate the effect that Domain Adaptation had on the selected networks' deep representation when we used CNNs trained for the periocular modality for Cross-Dataset recognition. We consider the following scenarios: CNNs pretrained with the ImageNet dataset, ImageNet CNNs fine-tuned for periocular recognition with auxiliary datasets, randomly initialized CNNs, and finally, randomly initialized CNNs trained for periocular recognition with auxiliary datasets. * We examined how the acquisition method and input image preprocessing can affect the performance of Deep Representations and best-performing layers. * Finally, we also report the generalizability of the best layer found by showcasing how performance varies when using the best-layer information between datasets and same-dataset partitions on the Open-World (OW) and Close-World (CW) cases. The rest of the paper is organized as follows. Section <ref> frames our work within the related research area of periocular biometrics. Section <ref> presents the databases and networks used, training strategy, and general methodology. Section <ref> explains the paper's experimental framework. Section <ref> shows the results and compares them with the state-of-the-art and related databases used. Finally, in Section <ref>, we present the final conclusions obtained from our research. [ht!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9]databases.png Image samples from each database. a) Original database samples, UBIPr shows the relative difference size between image samples b) normalized images after pre-processing as explained in <ref>. Cross-Eyed c) and d) show the normalization difference used in Section <ref>. In c), the only modifications to the images were the conversion to grayscale and cropped to be squared, while d) shows the full normalization effect as explained in <ref>. § RELATED WORK This section surveys Deep Learning biometric recognition, focusing particularly on periocular biometrics and One-Shot Learning. This paper extends two previous works <cit.> <cit.> that dealt with deep representations for periocular recognition. In <cit.>, we compared the performance per layer of deep representations from a selection of four well-known architectures pre-trained on ImageNet or face recognition databases (AlexNet, GoogLeNet/Inception v1, ResNet, and VGG) and the results with traditional computer vision hand-crafted features. The data employed consisted of periocular images in the visible range from the UBIPr database. This paper discovered that intermediate CNN representations of such networks could outperform traditional CV methods used in biometric recognition with no additional training needed. Furthermore, we saw that biometric-trained models like VGG-Face did not perform better than their general-purpose counterparts trained for the ImageNet challenge. In <cit.>, we extended the work for cross-spectral periocular recognition using the IMP database, which contains images with three different types of illumination: visible, near-infrared, and night vision. We first analyzed the changes in the performance per layer, observing that the optimal layer is different for each spectrum. 
We later investigated how intermediate representations could be used for cross-spectral purposes. We found that cross-spectral performance could be improved by training a fully-connected network at the end of the best-performing layer of each spectrum, thus demanding a small fine-tuning step only. The cross-spectral performance was observed to improve largely, up to 65% (EER) and 87% (accuracy at 1% FAR) wrt previous papers, constituting the best-published results to date on the IMP database. The work <cit.> focused on one CNN only (ResNet), while the present paper extends the study of the cross-spectral issue to a selection of six different architectures. The mentioned periocular research of <cit.> <cit.> is inspired by <cit.>, where the authors studied the per-layer performance in iris recognition of five different ImageNet pre-trained CNNs (AlexNet, VGG, Inception, ResNet, and DenseNet). The segmented and normalized iris image is given to the CNN. The intermediate layers' output at different depths is then extracted and used as feature representations to feed an SVM for identity classification. The paper achieved state-of-the-art recognition performance on two large iris databases, LG2200 (ND-CrossSensor-Iris-2013) and CASIA-Iris-Thousand. The authors concluded that the employed Off-The-Shelf CNNs can extract rich features from iris images that could be used for recognition, thus reducing the complexity of using CNNs for the task by not having to train them, opening the door to new iris representations. In our previous papers <cit.> <cit.>, and in the present contribution, we also follow this direction for the periocular modality. CNNs pretrained on large image datasets such as ImageNet, MS1M, and VGG-Face have been widely used as the backbone of many architectures in the literature. In <cit.>, the authors use a frozen VGG16 trained on ImageNet with its Fully Connected layers discarded as the backbone architecture to extract periocular features later used for person recognition, for soft biometric classification, and both together in a Joint Periocular Recognition Block. They improved the SOA for periocular recognition on both UBIRISV2 and FRGC datasets, as well as the soft-biometric classification on FRGC. In <cit.>, the authors used a VGG16 trained for face recognition using the VGG-Face dataset, a dataset with 2,6M images and 2,622 identities, and fine-tuned the network for periocular recognition while controlling the size of the final feature vector. Once the network was adapted to the new domain, the last layers were removed, and the recognition was made by comparing the deep representations of the test images using the Euclidean distance, Spearman distance, or Cosine similarity. They demonstrated the feasibility of using the periocular area in unconstrained scenarios by achieving SOA on NICE.II and MobBIO, two datasets with images captured in uncontrolled environments in the visible spectrum. In another study <cit.>, the authors used a similar approach to analyze the effect that iris normalization and segmentation have on Deep Representations for biometric recognition. They used two networks (VGG and ResNet50) trained for face recognition and fine-tuned them for iris recognition by removing the last layer and incorporating two new fully connected layers. Once the training was complete, they removed the last classification layer and used the Cosine similarity between the deep representations for biometric verification, reaching a new SOA for the NICE.II dataset. 
The authors of <cit.> used a One-Shot learning approach to extract a vector of embeddings for joint biometric and sensor recognition. They extracted the images' deep representations using an embedding network that was trained using one of three different types of losses: Cross-Entropy, Contrastive (single and double margin), and Triplet-Loss (with off-line and online triplet mining, as well as multi-class negative-pairs). They then extracted the vector of embedding from the final layer of the network (removing the classification layer from the Cross-entropy approach) and use it for recognition. They compared their results in three different biometric modalities: face, periocular, and iris, as well as for two different types of sensors: Near-Infrared iris sensors and smartphone cameras. They found that the representations were robust across the three biometric modalities and different sensors, outperforming SOA commercial approaches. In the paper <cit.>, authors proposed a method that consists of a periocular ROI detection model for image alignment, custom data augmentation, and illumination normalization to extract robust and generalizable periocular features using a MobileNetv2 network. They followed an Open-Set protocol in which they trained their models using the VISOB database on visible images and then evaluated the generalizability of their model on UBIRIS-V2, UBIPR, FERET, Cross-Eyed, CASIA-IRIS-TWINS, which includes adverse imaging environment and cross-spectral comparisons, by matching the features vectors extracted using Cosine similarity. They reduced the error rate up to 7 times when compared with existing models in the literature. In <cit.>, authors proposed to use an unsupervised convolutional auto-encoder to create subject-invariant feature representations for ocular recognition. For each image input, two augmented views were created and fed to the network. They used an L1 norm between the Deep Representations of both image views created by the encoder and between the original and reconstructed images by the decoder. They coupled the loss also with a KL-divergence term as a sparsity regularizer and two coefficients to weigh the contribution of the regularizer and Deep Representations to the loss. They followed an Open-Set cross-dataset evaluation protocol where they used the Cosine similarity or Hamming distance for matching. They achieved a 2.2% lower EER for cross-illumination conditions when compared to a supervised ResNet50. [h!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9]data_partitions.png Different database partitions for the Close World (CW), Open World (OW), and Complete protocol. § DATABASES, METRICS, AND PROTOCOL This section describes the databases, the matching protocol, and the metrics used to compare the results from our experiments and the baselines. §.§ Databases We employed images in the visible (VIS) range from four commonly used periocular datasets in the experimentation: IIITD Multispectral Periocular (IMP) <cit.>, UBIPr <cit.>, Cross-Eyed <cit.><cit.> and PolyU <cit.>. UBIPr is a periocular database captured with a CANON EOS 5D digital camera with different degrees of subject-camera distance (4-8m), resolutions, illumination, poses, and occlusions in two different sessions. To match the same type of images than the other databases, we only kept the frontal images. In addition, we retained only the users that had two recorded sessions. 
Since both eyes are available per user per session, our final database has 86 individuals ×2 sessions ×2 eyes ×5 distances = 1720 images. Each eye is considered a different identity, thus having 172 identities. Furthermore, we resized the images using bicubic interpolations. We normalized them (with the annotated ground-truth used in <cit.>) to have the same average sclera radius in their distance group and aligned them by extracting a square region of 7.6R_s x 7.6R_s around the sclera center. IMP is a cross-spectral periocular database. It offers images captured in three spectra: Near-Infrared (NIR), Visible (VIS), and Night Vision. The VIS images were captured using a Nikon SLR camera from a distance of 1.3m in a controlled environment and illumination. The database has 62 users with 5 images per user and per spectrum containing both eye regions. We manually annotated the sclera center of each eye and the sclera radius. Then, we separated each eye and normalized the images to have the same sclera radius, and aligned them by cropping a squared region around its sclera center. The database thus has 62 users ×2 eyes ×5 images per eye = 620 VIS images. Cross-Eyed is a cross-spectral periocular database captured for the 1^st Cross-Spectral Iris/Periocular Competition <cit.>. The database was collected using a custom dual-spectrum image sensor that simultaneously captured images in both NIR and VIS at a distance of 1.5m in an uncontrolled indoor environment. It comprises images of periocular and iris regions of 120 subjects from different nationalities, ethnicities, and eye colors. There are 120 subjects ×8 images ×2 eyes = 1920 images per spectra and modality. In this paper, we make use of VIS periocular images. Periocular images in Cross-Eyed have their iris masked to ensure pure periocular recognition. We used these masks to normalize them to have the same sclera radius, center, and orientation. They were also zero-padded and cropped, so all have the same size. PolyU is an iris image database captured using simultaneous bi-spectral imaging. It offers iris images in NIR and VIS, where each eye has pixel correspondence between both spectrum versions. It has 209 subjects ×15 images ×2 eyes = 6270 images per spectrum. As with the previous datasets, we only used the VIS images for this paper. Since the periocular region in this dataset is rather limited, images are just resized to be squared. All images were converted to grayscale to normalize skin color across databases, padded with zeroes when images were not squared, resized using bicubic interpolation, and copied across the RGB channels to fit Imagenet networks' input size. Figure <ref> shows examples of images from the different databases after this procedure. §.§ Metrics The Equal Error Rate (EER) is the most common evaluation metric for biometric verification systems. EER refers to the error at the intersection point between the False Acceptance Rate (FAR) and the False Rejection Rate (FRR) curves. To compare two feature vectors v1 and v2, we used the cosine similarity illustrated in Equation <ref> for its fast calculation, even for very high dimensionality vectors, to calculate the FAR and FRR from the CNN embeddings, as well as with the LBPH and HOG descriptors. With SIFT features, we used Equation <ref>, defined as the ratio of matches (M) between images over the minimum number of keypoints (K) detected in either image a and b, with epsilon being a control parameter for any case when no keypoints where found in an image. 
cosine = v_1 * v_2/‖ v_1 ‖‖ v_2 ‖ ratio_sift = M/min (K_a,K_b,ϵ) §.§ Protocols [hbt!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9]feature_extraction.png Example of a middle layer's Deep Representation extraction for VGG19. Although biometric identification was used for training the networks for periocular recognition, a verification setting was the choice for analyzing the performance of the proposed method. In biometric verification, one compares an input image against an image of the identity the user claims to be. If the similarity between images is above a predefined threshold, the user is considered genuine; otherwise, the user is considered an impostor. All tests employed a cross-dataset One-Shot Learning approach. If a network is to be trained, we used a dataset formed from the combination of all databases introduced in <ref> except the one used for testing. For instance, when calculating the EER for the Cross-Eyed, a combined dataset with all data from UBIPr, IMP, and PolyU was used to form the training set. Subsequently, for calculating the test performance, we followed an all-against-all strategy, computing all pairs of genuine and impostor scores. All images in the test dataset were used to analyze the performance per layer of the networks. However, we also follow the same protocols as in other previous papers employing the same databases to enable comparison. In particular, for comparison for the PolyU and Cross-Eyed, we use the same approaches carried on <cit.>: the closed-world (CW) protocol, where the images from each user are split into training and testing; and the open-world (OW) protocol, where the users are split into training and test, along with all their available images in such a way that there are no images from the same user in training and test simultaneously. Figure <ref> reflects the difference between the partitions in CW and OW. In the PolyU dataset and the Close-World setup, the "Test" partition contains the last five images of each user while the remaining ten images are included in the "Train" partition. In the Open-World approach, the subjects are divided into two halves of 209 users each, the first half used for the "Train" partition and the latter half of the subjects for the "Test" partition. Regarding the Cross-Eyed database, the CW "Test" partition includes the last three images of each user, and the remaining five images go to the "Train" partition. For the OW protocol, the users are divided into two halves here as well, with the first 120 users for "Train" and the last 120 users for "Test". To summarize the number of classes, images, and comparisons for each partition, refer to Table <ref>. In the case of the IMP database, due to limited data, we only make all-against-all comparisons to calculate the performance on the complete dataset. Finally, the UBIPr dataset is only used in training due to the experiments with different networks already done in <cit.>. § METHODOLOGY This section presents the experimentation setup used for this study. In particular, the networks, libraries, training strategies, and other algorithms used and how the data was handled and compared. This paper investigates the performance of deep representations in the middle layers of convolutional neural networks (CNNs) for periocular recognition. We also focus on the impact of training and Transfer Learning on performance. 
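Before detailing the networks, we sketch the two scoring steps used throughout the verification experiments, namely the cosine similarity of Eq. (<ref>) and the EER computed from the all-against-all genuine and impostor score sets. The helper names below are illustrative assumptions, not the code used in the experiments.

```python
import numpy as np

def cosine_score(v1, v2):
    """Cosine similarity of Eq. (<ref>) between two flattened feature vectors."""
    v1, v2 = np.ravel(v1), np.ravel(v2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def equal_error_rate(genuine, impostor):
    """EER from genuine/impostor similarity scores: sweep the decision
    threshold and return the operating point where FAR and FRR cross."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return 0.5 * (far[i] + frr[i]), thresholds[i]
```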
We utilized six widely used and readily available CNNs: ResNet101v2 <cit.>, DenseNet121 <cit.>, VGG19 <cit.>, Xception <cit.>, Inceptionv3 <cit.>, and MobileNetv2 <cit.>. We conducted periocular verification on the VIS images of the IMP, Cross-Eyed, and PolyU datasets presented in the previous section. We assessed how the performance per layer of each network varied as a One-Shot verification algorithm on a target dataset. To do so, four cases were considered: i) networks trained with the ImageNet dataset; ii) ImageNet networks fine-tuned for periocular recognition; iii) random initialized networks; iv) networks trained for periocular recognition from scratch In cases ii), iv), where the network requires training for periocular recognition, we do so by training it for biometric identification. The training set is composed of all the available periocular datasets except the one used for testing, as indicated in the previous section. When the target (test) dataset was IMP, we combined UBIPr, Cross-Eyed, and PolyU to form a dataset with 830 classes and 9,909 images. We then split it into training and validation sets. The validation set included the last image-distance of each session and user from the UBIPr dataset, the last two images of the Cross-Eyed dataset for each user, and the last five images of each user from PolyU, resulting in training and validation partitions of 6,996 and 2,913 images, respectively. When the target dataset was Cross-Eyed, we trained the networks on a dataset combining UBIPr, IMP, and PolyU, which comprised 8,609 images from 714 classes. The training and validation split followed the same strategy as IMP for UBPIr and PolyU; for the IMP dataset, we used only the last image of each eye and user for the validation split, resulting in training and validation sets of 6,052 and 2,557 images, respectively. We used Tensorflow-Keras to download, initialize, train, and test the networks. We retained the network's main body, altering only the final Dense layer to fit the number of training classes. We trained them using the Adam optimizer with a learning rate of 0.003, except for VGG, for which we employed Stochastic Gradient Descent with a learning rate of 0.001 and a ClipValue of 0.5, as it provided better stability during training. We trained the networks with Early Stopping, monitoring the validation loss with a patience of 20 epochs and a maximum limit of 500, saving and restoring the weights of the best-performing epoch. Due to GPU memory constraints, the batch sizes were either 16 or 32, depending on the network. We performed data augmentation by randomly rotating the images up to 30 degrees, shifting the height and width by up to 20 percent, and zooming by up to 20 percent. All training was conducted on a Windows 10 machine with 64GB of RAM and an Nvidia RTX2070 GPU with 8GB of VRAM. After preparing all the networks, we extracted the output of the network layers as illustrated in Figure <ref>. We sliced the network from the input to the desired layer, inputted all the images from the target dataset, extracted the layer's output 4D matrix, and flattened the matrix while maintaining the batch dimension. Subsequently, we compared the entire dataset using an all-against-all matching strategy, employing Cosine similarity for its rapid and straightforward computation before proceeding to the next layer, as described in Section <ref>. 
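The layer-slicing procedure just described amounts to a few lines of Keras code. The following sketch assumes an ImageNet-pretrained backbone, and the layer name in the commented usage is a hypothetical example rather than the exact configuration of our experiments.

```python
import numpy as np
import tensorflow as tf

def layer_embeddings(base_model, layer_name, images, batch_size=16):
    """Slice the network at `layer_name`, push all (already preprocessed)
    images through it, and flatten the 4D output keeping the batch axis."""
    sub = tf.keras.Model(inputs=base_model.input,
                         outputs=base_model.get_layer(layer_name).output)
    feats = sub.predict(images, batch_size=batch_size, verbose=0)
    return feats.reshape(len(images), -1)

def all_vs_all_cosine(feats):
    """All-against-all cosine similarity between flattened deep representations."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return normed @ normed.T

# Hypothetical usage:
# base = tf.keras.applications.ResNet101V2(weights="imagenet")
# sims = all_vs_all_cosine(layer_embeddings(base, "conv4_block6_out", imgs))
```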
We also use in our experiments three methods based on the most widely used features in periocular research, employed as baselines in many studies <cit.>: Histogram of Oriented Gradients (HOG) <cit.>, Local Binary Patterns (LBPH) <cit.>, and Scale-Invariant Feature Transform (SIFT) key-points <cit.>. HOG and LBPH features are extracted from non-overlapping regions of the image, forming per-block histograms of 8 bins, which are then concatenated to form a feature vector of the entire image. Comparison between two images is done via Cosine similarity between their histograms. SIFT, on the other hand, operates by extracting key-points (with dimension 128 per key-point) from the entire image. The comparison metric between two images is as explained in <ref>. For LBPH and HOG extraction, we used the native Matlab implementation, while for SIFT, we employed the Matlab version available here[https://www.vlfeat.org/overview/sift.html]. § RESULTS AND DISCUSSION This section presents the results obtained from the experiments for the different networks, databases, and modalities. We started by analyzing the performance of middle-layer representations of well-known networks, trained for the ImageNet dataset, for periocular verification. These pre-trained networks have become the standard starting point for most image classification tasks <cit.>. Once we obtained the reference results, we explored how they compare when the networks are trained for the same type of data as the target domain, both in EER and in the depth of the best layer. The results of this study are reported in Section <ref>. Since we are comparing very high dimensional data using simple similarity scores, we also investigate the impact that alignment and preprocessing of the input images can have on this type of Transfer Learning strategy. This is done in Section <ref>. In these two sections, we have utilized entire datasets to compare the performance of the methods. To assess how the employed strategies generalize, we examine in Section <ref> the consistency of the method in terms of layer depth and performance when changing between training and test partitions on the same dataset, as well as at the best layer found on other datasets. Finally, in Section <ref>, we compare our results with previous CNN-based works employing the same datasets, as well as with traditional handcrafted features. §.§ Training effect [Figure IMP_group_all.png: EER per layer for the IMP dataset for the cases when: i) the network is trained with the ImageNet dataset (red curve), ii) the network is fine-tuned from the ImageNet model for periocular recognition (blue), and iv) the network is trained for periocular recognition from scratch (green curve). These training strategies i), ii), iv) are detailed in Section <ref>. The best cases per network and per training strategy are given in Table <ref>. Results are shown for the Complete protocol defined in Table <ref>.] [Figure Cross-Eyed_group_all.png: EER per layer for the Cross-Eyed dataset for the cases when: i) the network is trained with the ImageNet dataset (red curve), ii) the network is fine-tuned from the ImageNet model for periocular recognition (blue), and iv) the network is trained for periocular recognition from scratch (green curve). These training strategies i), ii), iv) are detailed in Section <ref>. The best cases per network and per training strategy are given in Table <ref>. Results are shown for the Complete protocol defined in Table <ref>.]
[Figure random_group.png: EER per layer when the networks were not trained and are just randomly initialized. The best cases per network are given in Table <ref> (IMP database) and Table <ref> (Cross-Eyed database). Results are shown for the Complete protocol defined in Table <ref>.] Tables <ref> and <ref>, along with Figures <ref>, <ref> and <ref>, show the effect that training has on the deep representations of the networks for periocular verification, using IMP and Cross-Eyed as test databases. The figures show the performance of the different CNN layers per network and per training strategy, while the tables summarize the best performance and in which layer it is obtained. Results in this subsection make use of the "Complete" partition of the databases (Table <ref>). As the tables show, the best results are not necessarily obtained with fine-tuned networks (cases ii, iv). For some networks, it is better to use ImageNet weights directly (case i), or even random weights (case iii), as with MobileNet. This is consistent with previous findings in <cit.>, in which the VGG-Face network, a VGG network trained for face recognition on a dataset with 1 million images, achieved worse periocular recognition results than its ImageNet counterpart. In absolute numbers, ResNet for the IMP dataset and InceptionV3 for Cross-Eyed are the networks that obtained the best results (EER of 2.05% and 0.7%, respectively). ResNet performs the best on the IMP dataset for both the ImageNet network (case i in Table <ref>) and the periocular-trained network (case iv). It also ranks second best for the IMP dataset with the fine-tuned ImageNet network (case ii, Table <ref>) and for the Cross-Eyed dataset with the ImageNet and fine-tuned ImageNet networks (cases i and ii in Table <ref>), albeit sharing the position in this last case with DenseNet. Conversely, InceptionV3 performs the best for these three categories on Cross-Eyed. Overall, it is thus unclear what training strategy is optimal, since the best EER obtained for each network and dataset varies. For ResNet and InceptionV3, it seems better to use the ImageNet version than any other training. On the other hand, it seems better to fine-tune DenseNet, VGG, Xception, and MobileNetV2. Interestingly, when the networks are fine-tuned, starting from ImageNet weights ("TL ImageNet" column in the tables, or blue curve in the figures) consistently gives better results than starting from scratch ("Trained" column, or green curve). This effect is much more prominent in the Cross-Eyed dataset. This corroborates previous works that suggest employing a general-purpose training such as ImageNet as the starting point, especially if the data available for fine-tuning is limited <cit.>. Upon examining Figures <ref> and <ref>, we can see that at deeper layers, the fine-tuned networks initialized with ImageNet weights (blue curves) start to perform better than their ImageNet counterparts (red curve). This may be due to the similarity in the domain and the higher abstraction at deeper layers achieved by fine-tuning, which helps to close the gap between datasets. However, fine-tuned networks started from scratch (green curves) are, in some cases, even worse than ImageNet networks, especially with the Cross-Eyed database. Again, this confirms that fine-tuning ImageNet networks is a better starting point than scratch initialization, especially with limited training data.
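The two fine-tuning initializations compared above (ImageNet weights versus random weights, with only the final Dense layer replaced) can be outlined as follows in Tensorflow-Keras. This is an illustrative sketch under the training settings stated in the Methodology (Adam, learning rate 0.003, early stopping with patience 20, at most 500 epochs); the builder function, the pooling choice, and the commented training call are our assumptions, not the authors' exact code.

```python
import tensorflow as tf

def build_finetune_model(arch="ResNet101V2", n_classes=714, imagenet_init=True):
    # Case ii) starts from ImageNet weights; case iv) starts from random weights.
    weights = "imagenet" if imagenet_init else None
    base = getattr(tf.keras.applications, arch)(
        weights=weights, include_top=False, pooling="avg",
        input_shape=(224, 224, 3))
    # Only the final Dense layer is replaced to match the training identities.
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.003),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=500, callbacks=[early_stop])
```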
On the other hand, at early layers, ImageNet and fine-tuned networks perform very similarly in many cases. This confirms the general assumption that early layers of CNNs usually extract low-level features that are domain-agnostic in many cases, while deeper layers become more specialized for the particular task at hand. Another very relevant result is that the very last layers of the networks always suffer a jump in error, even in the fine-tuned versions. Moreover, with IMP, many cases show performance degradation even earlier. Indeed, optimal performance with any network or database is attained already at the middle layers, or just after the first third of the network. We examined the performance per layer of randomly initialized networks as a control (Figure <ref>). However, the results are surprisingly good in some cases. As Tables <ref> and <ref> show, the performance of networks with random weights is not as bad as might be expected, equating to or even beating other cases involving training. The performance per layer, as seen in Figure <ref>, shows very stable behavior after some initial variability. This is somewhat expected: since the weights are random, so are the extracted features. Most networks have no clear positive or negative tendency as depth increases, but they show a relative plateau in performance, especially after a relative depth of 0.2. DenseNet, however, does exhibit a slight performance improvement the deeper the layer is, but this is in the form of steps. Finally, we can see that MobileNet has peak performance at the very last layer. Indeed, MobileNetv2 with random weights achieves the second-best performance for both datasets. This represents an outlier, but it shows that the exponential behavior of the last layer can also work to one's advantage. Nonetheless, as mentioned above, some randomly initialized networks perform relatively similarly to their trained counterparts. DenseNet exemplifies this for the IMP dataset, and InceptionV3 and DenseNet for Cross-Eyed, where the difference in EER is less than 1%. A notable example is VGG, which performs better in its random version than in its trained one for both datasets. Moreover, as we will analyze later when comparing to other methods (Section <ref>, Table <ref>), all randomly initialized networks yield results comparable to baseline CV algorithms like LBPH and HOG. §.§ Normalization effect [Figure prepronoprepro_group_all.png: EER per layer for the Cross-Eyed dataset for the cases when the images were preprocessed and when they were not. The networks employed are trained with ImageNet. The best cases per network are given in Table <ref>.] We then examined the method's robustness to perturbations in the input data and how they affect the behavior of the deep representations. To do so, we employed normalized and unnormalized images of the Cross-Eyed database. The normalized version consisted of the images processed to have the same sclera radius, center, and orientation, as described in Section <ref>, whereas the unnormalized images are only converted to grayscale and cropped to a square using the smallest dimension as the reference size. Figure <ref> shows the effect of the normalization process on the data for the Cross-Eyed database. Although the Cross-Eyed database was captured at a constant distance, small differences in scale and position between different images can appear if the image is not normalized.
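As a rough illustration of the normalization just described (same sclera center, radius, and orientation for every image), the following sketch applies a single similarity transform with OpenCV. The sclera center, radius, and orientation angle are assumed to come from a segmentation step not shown here, and the canonical output size and target radius are arbitrary illustrative values rather than the settings used in the paper.

```python
import cv2
import numpy as np

def normalize_periocular(img, center, radius, angle_deg,
                         out_size=224, target_radius=60.0):
    # Map the detected sclera circle to a canonical position, size, and
    # orientation with one similarity transform (rotation + scale + shift).
    scale = target_radius / max(radius, 1e-6)
    M = cv2.getRotationMatrix2D((float(center[0]), float(center[1])),
                                angle_deg, scale)
    # After rotating/scaling about the sclera center, translate that center
    # to the middle of the output crop.
    M[0, 2] += out_size / 2.0 - center[0]
    M[1, 2] += out_size / 2.0 - center[1]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.warpAffine(gray, M, (out_size, out_size), flags=cv2.INTER_LINEAR)
```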
Figure <ref> shows the per-layer effect of input normalization on the performance of the deep representations. For space-saving purposes, we report results using networks trained for ImageNet only. We can see that the EER becomes significantly worse if images are unnormalized (blue curves), especially at the early and middle layers. Only in the final layers does the performance with unnormalized images become closer to that of the normalized counterparts. This is understandable, since networks are sensitive to scale, orientation and, to a lesser degree, translation. Only the deeper layers can achieve a higher-level representation of the input data, contributing to closing the gap between the two cases. However, normalized data achieves the best absolute performance with most networks, as shown in Table <ref>. Except for VGG, all networks exhibit an increase in EER between 77% (MobileNetv2) and 439% (InceptionV3) with unnormalized data. We can also see that the best-performing layer with unnormalized data moves close to the last layer, compared to the normalized version, which usually achieves the best performance in the first half of the network. Thus, robust data normalization is key to achieving better performance. §.§ Partition effect [Figure cross_partition_performance.png: Verification accuracy for each partition (defined in Table <ref>) of Cross-Eyed (left) and PolyU (right). The lower the values, the better. The values are re-scaled so that the maximum EER per CNN and database of the CW/OW experiments is set to 1 (white). Black indicates 0% EER. The exact values are given in Tables <ref>, <ref>.] Although one can utilize the networks without training them for a specific domain, we have seen in the previous subsections that it is still essential to determine which layer yields the best performance. As we demonstrate in Tables <ref> and <ref>, the best-performing layer can vary significantly from database to database and from network to network, even in the same periocular domain. In the present subsection, we go one step further and consider how the performance and the best layer can change across different partitions of the database. In other words, we select the best layer in one data partition and test the performance in another partition. Furthermore, we check generalizability even further by looking at how the performance changes when using the best-performing layers from other datasets. For this subsection, we only retain ResNet, DenseNet, and Inception, the best-performing networks of the previous subsections (Tables <ref>, <ref>, and <ref>). We report results using networks trained for ImageNet only, for space-saving purposes. In addition, since only PolyU and Cross-Eyed have different partitions and CW/OW protocols (Table <ref>) <cit.>, this section focuses on these two databases. Table <ref> shows the EER obtained on Cross-Eyed on the different partitions of the Closed-World (CW), Open-World (OW), and Complete protocols. The "Train", "Test", and "Complete" partitions of the different protocols refer to those detailed in Table <ref>. Recall that the main difference between the CW and OW protocols (Figure <ref>) is that the CW protocol contains the same users in the "Train" and "Test" splits (the images of each user are split into two sets), whereas the OW protocol contains different users in the "Train" and "Test" splits (the users are split into two sets).
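The two partition schemes can be expressed in a few lines of Python. This is a schematic sketch only: the per-user image dictionary and the number of held-out images per user are assumptions, chosen to mirror the splits described above (for instance, the last five images per user for PolyU in CW, or splitting the users into two halves for OW).

```python
def closed_world_split(images_by_user, n_test_per_user):
    # CW: every user appears in both splits; the last images of each user go to test.
    train, test = {}, {}
    for user, imgs in images_by_user.items():
        train[user] = imgs[:-n_test_per_user]
        test[user] = imgs[-n_test_per_user:]
    return train, test

def open_world_split(images_by_user):
    # OW: users (with all their images) are split into two disjoint halves.
    users = sorted(images_by_user)
    half = len(users) // 2
    train = {u: images_by_user[u] for u in users[:half]}
    test = {u: images_by_user[u] for u in users[half:]}
    return train, test
```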
For better viewing, Figure <ref> (left) depicts the relative accuracy values (black=0%, white=maximum EER per CNN/database considering the CW and OW experiments together). From table <ref> and Figure <ref>, Inception offers very stable results with Cross-Eyed, at least when the best layer is selected on a partition of the same database. The best-performing layer for each partition and, when considering the whole dataset, are quite close together, as do the EER (see the relatively similar gray colors in Figure <ref>, left bottom). Inception even yields the same exact layer regardless of the partition and mode used to select it. Interestingly, this is true both in the CW case (the Train/Test partitions contain images from the same users) and the OW case (the Train/Test partitions have different users), meaning that Inception generalizes very well over the Cross-Eyed database, even to unseen users. On the other hand, when the best layer is selected externally (using the IMP database), the results degrade substantially, giving the worst EER. Indeed, this is true for any CNN or partition with the Cross-Eyed database (see the brightest boxes in Figure <ref>, left, which most of the times correspond to the case when the best layer is selected using IMP). In addition, the best layer with IMP is usually very different than the best layer calculated with other partitions. This suggests that, despite using databases in the VIS spectrum, their differences (in sensor, illumination, etc.) play a very important role in selecting the optimum layer. ResNet and Densenet, on the other hand, do not appear to generalize as well as Inception over Cross-Eyed. The best layer is different depending on the partition used to select it, at least in the CW case, where it can also be seen that the performance across partitions varies. This can be appreciated in the gray variations of Figure <ref>, left, for the CW case with these two networks. This result is interesting because the CW case contains images from the same users in all partitions, so one would expect similar optimum layers and performance. On the other hand, the OW case contains images from different users on each partition, but the preferred layers are closer, and the performance is more constant across partitions. One observation in this regard would be that the CW protocol entails more users but with fewer images per user in each partition, whereas the OW protocol includes fewer users per partition but with more images per user. This connects with previous research <cit.> <cit.> that shows that it is better to make training decisions based on a larger number of images per user, even if it implies fewer users. This is because a larger number of samples per user allows to model the intra-class diversity better. In our case, it translates to better generalizability when the best layer is selected under the OW protocol. As seen (Figure <ref>, left, OW case), the gray variations between partitions, in this case, are not so high compared to CW. Table <ref> and Figure <ref> shows the results for the PolyU dataset. In this case, we can see that the best layers for each network and partition are close to the network's end when they are selected on a partition of the same PolyU database. This is, as explained in Section <ref>, most likely due to the normalization of the PolyU dataset and its more inconsistent periocular region. 
Even though several papers use the PolyU dataset as a periocular one, the database was collected to be an iris database, so the surrounding ocular area is not consistent. The periocular area, orientation, location, and scale between images change much more than in Cross-Eyed. Also, the available periocular area is reduced, making periocular recognition with this database challenging. As a result, the layers with the best performance are towards the network's end when the networks have achieved a sufficient level of abstraction. If we use an external database instead, like IMP, which contains periocular images of better quality, the best layers appear earlier for all networks. While the lower abstraction of such layers may be sufficient for IMP, their performance on PolyU is substantially worse. Regarding the best layer of each network, it can be observed that it is approximately the same when it is selected using PolyU, no matter the partition or protocol used. This is relevant from a generalizability point of view. However, when it comes to the EER, the behavior of the Close-World and Open-World changes drastically, even if the layers at which they are calculated are the same. This can be attributed to the number of comparisons made in each mode combined with the worse quality of the PolyU images. In the Open-World case, since both partitions have the same number of users and the same number of images per user, the EER obtained for each one is very close. However, the number of images per user and partition varies in the Close-World mode, resulting in different genuine and impostor comparisons, as shown in Table <ref>. As the number of comparisons is smaller in the test partition, especially for the genuine case, the EER on this partition is systematically lower than the train partition. This is because a smaller amount of genuine scores do not allow to account sufficiently for intra-class variability effects, providing a more optimistic performance when the database is of lower quality. Lastly, it can also be seen with PolyU the negative effect of selecting the optimal layer with an external database. The worst EER (brightest boxes in Figure <ref>, right) happens when Cross-Eyed or IMP are used. In addition, there is no consistency per CNN. With ResNet, the worst result is given by the optimal layer in Cross-Eyed. However, with DenseNet and Inception, the worst EER comes from the optimum IMP layers. §.§ Comparison with SOA and others Table <ref> summarizes and compares the results with previous works using the same databases <cit.><cit.><cit.> and the LBP, HOG, and SIFT hand-crafted features. We surpass the state-of-the-art results for the Cross-Eyed dataset achieved by <cit.>. In their study, they train a ResNet50 and a VGG network for cross-spectral periocular recognition but also calculate the performance for same-spectrum verification. InceptionV3 reduced the EER by 58% and 79% for Cross-Eyed in Close-World and Open-World protocols, respectively, and it even achieves superior results than their fusion of iris and periocular. ResNet and DenseNet also reduced the EER w.r.t <cit.> despite lacking training on the target dataset. However, for PolyU, the situation is different. None of the networks achieved comparable or superior results than <cit.>. The increment in the EER for our method is probably due to the higher degree of variability in the PolyU data. 
At the same time, the bigger gap in the results for the PolyU and Cross-Eyed dataset achieved in <cit.> is partially due to the amount of data available to fine-tune the network, allowing training the CNN better for the task. We also outperformed our previous results on the IMP dataset <cit.>. Even if we also used a ResNet101, the version used in this study was a newer ResNet101V2 on Tensorflow-Keras, which can explain the difference. Regarding traditional computer vision algorithms, LBPH and HOG performed worst on each occasion. Nonetheless, SIFT managed to achieve similar results for all Cross-Eyed partitions. It also achieved comparative results for PolyU in the Close-World protocol but outperformed <cit.> in the Open-World case when the authors only utilized the periocular region. This can be due to the weakness of machine-learning methods when confronted with users that were not present in the training data, which can benefit non-trainable algorithms like SIFT. It must also be highlighted that PolyU has the worst image quality and highest variability among the databases employed. In this case, it becomes less evident the gap between hand-crafted and data-driven approaches when there is limited data. § CONCLUSIONS This study examined the effect that training and fine-tuning have on the behavior of CNN's deep representations for One-Shot learning. We utilized well-known pretrained networks as out-of-the-box feature extraction methods for periocular recognition. We investigated the behavior per layer of the networks for different datasets and under different training modes. Additionally, we examined the approach's robustness to some natural acquisition noise and how the best layer changes in relation to an auxiliary database or sampled data from the same distribution. There is no clear best option regarding training strategy. In our experiments, we have observed that it depends on the network used. ResNet, InceptionV3, and MobileNetV2 do better using the ImageNet weights, while the rest can benefit from fine-tuning to the target periocular task. As in previous works, ResNet typically yields one of the best performances among the networks, making it a good default option for this approach. It is worth mentioning that we outperformed CNNs specifically trained not only for the task of biometric recognition but also for the same dataset without having to fine-tune our models. Furthermore, non-CNN-based algorithms like SIFT can still outperform trained CNNs for the same dataset. Regarding robustness, a crucial factor seems to be the normalization of the input data. Since our method relies on simple similarity scores between high dimensional matrices, misalignment will heavily penalize the performance. Normalization also affects the depth at which the best-performing layer is situated, which tends to be close to the end for not normalized data, which is when the network has achieved a sufficient level of abstraction. When it comes to the sample set used to select the best layer, using an external database with different acquisition conditions has shown to have a very negative effect, giving much worse EER and a very different optimal layer w.r.t. using a partition of the same database. Also, it is essential to have a sufficient number of images per user to properly model intra-user variability. This is especially critical if the target dataset is of very low quality. A limitation of this approach is that it relies on finding the best layer for the task. 
To accomplish this, it is necessary to have a certain amount of data to calculate the network's performance for that specific domain properly. Nonetheless, the amount of data needed could be, in principle, smaller than the amount needed to properly train a network, as we can see in Table <ref>, where we achieve better performance for Cross-Eyed than a network trained for biometric recognition with millions of images and then fine-tuned for the dataset. However, the greater the available data, the better results the trained network will have. Another limitation of this approach is that the deep representation matrices of middle layers can be quite big, posing challenges for large datasets or embedded systems due to memory constraints. For these reasons, the normalization process required for this method makes it a suitable option only for small-scale, easy-to-normalize scenarios. Nevertheless, using facial landmarks and iris and sclera segmentation methods, the periocular region is relatively easy to normalize for frontal images. Regarding its advantages, our approach can use CNNs as out-of-the-box feature extractors with relatively good results. It also enables us to save resources on training, data collection, and processing power through network pruning, removing all other network parts that are not required to get to the best-performing layer. Our future work on this approach includes investigating methods to reduce the deep representations' dimensionality or memory used, as well as exploring its potential for network pruning in transfer learning. § ACKNOWLEDGMENT The authors thank the Swedish Research Council (VR) and the Swedish Innovation Agency (VINNOVA) for funding their research, as well as the National Supercomputer Center (NSC), funded by Linköping University, for providing the resources necessary for data handling and processing. IEEEtran [ < g r a p h i c s > ]Kevin Hernandez-Diaz received the B.S. in telecommunication engineering from Universidad de Sevilla, Spain, in 2016 and the M.S. in Data Science and computer engineering from Universidad de Granada, Spain, in 2017. He is currently pursuing the Ph.D. degree in Signals and Systems Engineering at Halmstad University, Sweden. His thesis focuses on biometric recognition from the ocular region in unconstraint sensing environments. His research interests include AI for biometrics, particularly face and ocular recognition, as well as signal and image analysis, processing, generation, and feature extraction using Deep Learning. [ < g r a p h i c s > ]Fernando Alonso-Fernandez is a docent and an Associate Professor at Halmstad University, Sweden. He received the M.S./Ph.D. degrees in telecommunications from Universidad Politecnica de Madrid, Spain, in 2003/2008. Since 2010, he is with Halmstad University, Sweden, first as recipient of a Marie Curie IEF and a Postdoctoral Fellowship of the Swedish Research Council, and later with a Project Research Grant for Junior Researchers of the Swedish Research Council. Since 2017, he is Associate Professor at Halmstad University. His research interests include AI for biometrics and security, signal and image processing, feature extraction, pattern recognition, and computer vision. He has been involved in multiple EU/national projects focused on biometrics and human–machine interaction. He has over 100 international contributions at refereed conferences and journals and several book chapters. Dr. 
Alonso-Fernandez is Associate Editor of IEEE T-IFS and the IEEE Biometrics Council Newsletter, and an elected member of the IEEE IFS-TC. He co-chaired ICB2016, the 9th IAPR International Conference on Biometrics. He is involved in several roles at the European Association for Biometrics (EAB), such as co-chairing the EAB-RPC or jury member of the of the annual EAB Biometrics Awards. [ < g r a p h i c s > ]Josef Bigun is a Full Professor of the Signal Analysis Chair at Halmstad University, Sweden. He received the M.S./Ph.D. degrees from Linköping University, Sweden, in 1983/1988. From 1988 to 1998, he was a Faculty Member with EPFL, Switzerland, as an “Adjoint Scientifique.” He was an Elected Professor of the Signal Analysis Chair (current position) at Halmstad University. His scientific interests broadly include computer vision, texture and motion analysis, biometrics, and the understanding of biological recognition mechanisms. He has co-chaired several international conferences and contributed to initiating the ongoing International Conference on Biometrics, formerly known as Audio and Video-Based Biometric Person Authentication, in 1997. He has contributed as Editorial Board Member of journals, including Pattern Recognition Letters, IEEE Transactions on Image Processing, and Image and Vision Computing. He has been Keynote Speaker at several international conferences on pattern recognition, computer vision, and biometrics, including ICPR. He has served on the executive committees of several associations, including IAPR, and as expert for research evaluations, including Sweden, Norway, and EU countries. He is a Fellow of IAPR.
http://arxiv.org/abs/2307.07224v1
20230714083708
Three-Dimensional Fully Metallic Dual Polarization Frequency Selective Surface Design Using Coupled-Resonator Circuit Information
[ "Ignacio Parellada-Serrano", "Mario Pérez-Escribano", "Carlos Molero", "Pablo Padilla", "Valentín de la Rubia" ]
physics.app-ph
[ "physics.app-ph" ]
Three-Dimensional Fully Metallic Dual Polarization Frequency Selective Surface Design Using Coupled-Resonator Circuit Information Ignacio Parellada-Serrano, Mario Pérez-Escribano, Carlos Molero, Pablo Padilla and Valentín de la Rubia I. Parellada-Serrano, C. Molero, and P. Padilla are with the Department of Signal Theory, Telematics and Communications, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, Granada, Spain (e-mails: [email protected]; [email protected]; [email protected]). M. Pérez-Escribano is with the Telecommunication Research Institute (TELMA), Universidad de Málaga, E.T.S. Ingeniería de Telecomunicación, 29010 Málaga, Spain (e-mail: [email protected]). V. de la Rubia is with the Departamento de Matemática Aplicada a las TIC, ETSI de Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain (e-mail: [email protected]). Manuscript received XX/XX/XXXX; revised XX/XX/XXXX; accepted XX/XX/XXXX. This work has been supported by grant PID2020-112545RB-C54 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR. It has also been supported by grants PDC2022-133900-I00, TED2021-129938B-I00 and TED2021-131699B-I00, and by Ministerio de Universidades and the European Union NextGenerationEU, under Programa Margarita Salas. August 12, 2023 This work employs a new approach to analyze coupled-resonator circuits to design and manufacture a fully metallic dual polarization frequency selective surface (FSS). The proposed filtering structure is composed of a series of unit cells with resonators fundamentally coupled along the z-direction and then repeated periodically in the xy-plane. The fully metallic cascaded unit cell is rigorously analyzed within an infinite periodic environment as a coupled-resonator electromagnetic (EM) circuit. The convenient design of the EM resonators makes it possible to push the evanescent EM field through the metallic structure in the desired frequency band for both polarizations.
An FSS prototype is manufactured and measured, and good agreement is found between the simulation results and the final prototype. Index Terms: Computational prototyping, frequency selective surfaces, simulation and optimization, GRL calibration. § INTRODUCTION Evanescent filters emerged as technological solutions to overcome drawbacks associated with propagating filters <cit.>. Evanescent filters significantly improve insertion-loss levels and provide a flatter passband and sharp rejection responses (good selectivity) <cit.> thanks to their inherent high Q-factor. They otherwise invoke narrow-band transmission, which is very useful for the output stages in data transmitters <cit.>. In addition, evanescent filters excel in compact size and reduced weight <cit.>, enabling suitable integration in communication systems <cit.>. Pioneers on this topic date back to the fifties. To the authors' best knowledge, the first complete publication about filters based on evanescent networks was realized by S. B. Cohn in 1957 <cit.>, who explored lumped-element topologies. Later on, Prof. Craven and his team further studied these systems <cit.>, both from the theoretical and the experimental point of view. Rectangular waveguides were established as the prominent technology for evanescent filters due to their simplicity to operate in the cutoff (evanescent) region <cit.>. The development of this technology has continued until the present century, benefiting from the permanent improvement of fabrication techniques and commercial electromagnetic solvers. Special attention is deserved by the in-line configurations <cit.>, consisting of waveguides loaded with periodically spaced pin-loads <cit.>, ridges <cit.>, dielectric mushrooms <cit.>, exotic ridges <cit.>, non-resonating modes <cit.>, and frequency-variant couplings <cit.>, among others. Filters based on FSSs are modern solutions proposed to operate in certain scenarios, such as specific space and military environments with demanding applications for RCS reduction in aircraft or antenna radomes <cit.>. FSSs exhibit more flexibility in covering large areas or adapting to curved surfaces <cit.>. They are otherwise generally based on propagating systems <cit.>, lacking good sensitivity and increasing the risk of undesired interferences. Full-metal 3D designs, such as those in <cit.>, arise as promising solutions. Full-metal architectures are more robust to extreme thermal and environmental conditions, making them promising candidates for space missions <cit.>. In addition, full-metal cells have high Q-factors and may operate in the cutoff regime <cit.>, satisfying low-weight and small-size requirements together with flat-band and sharp-rejection responses. The structure proposed in this paper is a fully metallic FSS, as shown in Fig. <ref>. It is formed by periodic arrangements of square waveguides with dog-bone resonators perforated along the walls. We started from previous FSS design topologies, such as <cit.>. This FSS structure has never been employed for filtering purposes. Furthermore, addressing more than three cascaded resonators in the design becomes a complicated task, since controlling the couplings among resonators turns into a real challenge, let alone taking into account both polarizations, where even more resonances need to be handled.
This paper proposes a dual polarization FSS design with wideband performance by increasing the number of resonators to 7 along each polarization. As a result, 14 resonances are present within the FSS unit cell. Due to symmetry considerations, the design of a 7^th order coupled-resonator circuit for each polarization is enough to account for the dual polarization behavior rigorously. A recent electromagnetic coupling matrix technique <cit.> is used both to approve a tentative initial design and to guide the final full-wave optimization loop, so as to tune the frequency response of the FSS straightforwardly. § ELECTROMAGNETIC COUPLING MATRIX Due to the 90-degree symmetry along the z-direction (EM wave propagation direction) taken into account in the unit cell detailed in Fig. <ref>, this structure can be rigorously analyzed for one single polarization, thus obtaining the response for the orthogonal polarization in the same analysis. This is actually the rationale behind the 90-degree symmetry along the z-direction in the unit cell shown in Fig. <ref>. Furthermore, under TE plane wave illumination, this periodic unit cell can be effectively analyzed employing simple PMC and PEC boundary conditions on the corresponding opposite sidewalls, thus dropping the requirement for periodic boundary conditions. Under this scenario, we solve time-harmonic Maxwell's equations in the analysis domain Ω⊂ℝ^3 (which contains the unit cell as shown in Fig. <ref>) to obtain the electromagnetic field as a function of frequency (although we prefer wavenumber notation, k) in Ω. We carry out this full-wave analysis by means of the finite element method (FEM) and the reduced-basis method, where no approximation is taken into account; the analysis domain is meshed as detailed in <cit.>. As a result, a reliable representation of the electric field in Ω for the band of analysis ℬ=[k_min,k_max] is obtained, viz. 𝐄(k) = j k η_0 ∑_{k_n^2 ∈ℬ_2} A_n/(k_n^2 - k^2) 𝐞_n + ∑_{n=1}^{N} β_n(k) 𝐄(κ_n). ℬ_2 stands for [k^2_min,k^2_max]. η_0 is the intrinsic impedance of vacuum. k_n and 𝐞_n stand for the eigenresonances and corresponding eigenmodes of the FSS unit cell. Coefficients A_n and β_n(k) are conveniently determined in the full-wave analysis by the reduced-basis method. We refer the interested reader to <cit.> for all the details. This electric field solution (<ref>) allows us to find the impedance matrix transfer function 𝐙(k) with ease. Thus, (v_1, …, v_M)^T = j k η_0 ∑_{k_n^2 ∈ℬ_2} [ (c_n1, …, c_nM)^T (c_n1, …, c_nM) / (k_n^2 - k^2) ] (i_1, …, i_M)^T + 𝐙_out-of-band(k) (i_1, …, i_M)^T, that is, 𝐯 = 𝐙(k) 𝐢 = (𝐙_in-band(k) + 𝐙_out-of-band(k)) 𝐢 = 𝐯_in-band + 𝐯_out-of-band = 𝐙_in-band(k) 𝐢 + 𝐙_out-of-band(k) 𝐢. We have deliberately split the EM contributions into in-band and out-of-band, namely, 𝐙_in-band and 𝐙_out-of-band. As a result, only in-band eigenmodes are taken into account in 𝐙_in-band, and the remaining EM contributions are left to 𝐙_out-of-band. 𝐯 is also decomposed into these two contributions, namely, 𝐯_in-band and 𝐯_out-of-band, respectively. The poles k_n in 𝐙_in-band have rank-1 matrix residues, cf. (<ref>). This resembles a Foster impedance representation in 𝐙_in-band <cit.>. This property allows us to find a more insightful state-space dynamical system matrix representation for 𝐙_in-band, viz. [ 0 𝐂; 𝐂^T 𝐀(k) ] [ 𝐢; 𝐄 ] = [ 𝐯_in-band/(-j k η_0); 0 ], 𝐯_in-band = j k η_0 𝐂 𝐀^{-1}(k) 𝐂^T 𝐢 = 𝐙_in-band(k) 𝐢.
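As a purely numerical illustration of the in-band impedance expression just stated, where 𝐀(k) and 𝐂 are defined in the following paragraph, the Python sketch below evaluates 𝐙_in-band(k) = j k η_0 𝐂 𝐀^{-1}(k) 𝐂^T over a wavenumber sweep. The eigenresonance values, the coupling coefficients, and the sweep range are placeholder numbers chosen only for illustration; they are not data from the paper.

```python
import numpy as np

ETA0 = 376.730313668  # intrinsic impedance of vacuum (ohms)

def z_in_band(k, k_n, C):
    # Z_in-band(k) = j*k*eta0 * C * A(k)^{-1} * C^T, with A(k) = diag(k_n^2 - k^2).
    A_inv = np.diag(1.0 / (k_n ** 2 - k ** 2))
    return 1j * k * ETA0 * (C @ A_inv @ C.T)

# Placeholder data: 7 in-band eigenresonances (rad/m) and couplings to M = 2 ports.
k_n = np.array([261.34, 265.91, 270.47, 274.72, 279.28, 284.63, 288.96])
C = 0.05 * np.random.default_rng(0).normal(size=(2, 7))
k_sweep = np.linspace(250.0, 300.0, 501)
Z11 = np.array([z_in_band(k, k_n, C)[0, 0] for k in k_sweep])  # port-1 self-impedance
```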
𝐀(k) is a diagonal matrix with entries k_n^2-k^2, namely, 𝐀(k)=𝐊-k^2 𝐈𝐝, 𝐊=diag{k_n^2 ∈ℬ_2}, the state space 𝐄 stands for the electric field in the in-band eigenmode basis {𝐞_n, k_n^2 ∈ℬ_2 }, and the entries C_pn of the matrix 𝐂 (C_pn=c_np) stand for the coupling coefficients from the ports to each in-band state, i.e., to each eigenmode found in the band of analysis ℬ. As a result, the matrix [ 0 𝐂; 𝐂^T 𝐊 ] gives rise to an electromagnetic coupling matrix description in the transversal topology of the FSS unit cell in the band of interest ℬ. Further manipulations can be carried out to get the electromagnetic coupling coefficients among resonators. See <cit.> for further details. Summing up, every time a full-wave analysis is carried out within the analysis domain Ω, we get valuable design information for free (no additional computations have to be carried out) by using this electromagnetic coupling matrix approach. This design information guides us in the full-wave optimization loop to tune, in a few iterations, the target EM frequency response of the infinite FSS. We will get back to this point in the next Section. § BASELINE UNIT CELL The proposed unit cell in this study adopts a fully metallic three-dimensional geometry and is periodically placed on the xy-plane. This unit cell is illustrated in Fig. <ref> and is influenced by the structure in <cit.>. It is the fundamental building block for defining the filtering structure, determining its order, and specifying its properties. To prevent internal propagation, the dimensions of the cell are carefully chosen. The transmission response is then regulated through dog-bone-shaped resonators inserted along the walls. Despite its reactive nature, the TE_10-mode (or the corresponding TE_01-mode) is excited by the incident polarization, which can be either vertical or horizontal, along y or x, respectively (see Fig. <ref>). The filter order corresponds to the number of resonators along the z-direction (the direction of propagation) for each polarization. Including resonators along the propagation direction imparts a three-dimensional nature to the overall structure. This design approach leverages the additional dimension to enhance the performance, granting an extra degree of freedom. This design scheme enables reasonably independent manipulation of each polarization. The horizontal polarization is fundamentally influenced by resonators positioned along the x-direction, while the vertical polarization is fundamentally affected by resonators placed along the y-direction. Inside the cell, the vertical polarization excites the TE_10-mode, whereas the horizontal one excites the TE_01-mode, as illustrated in Fig. <ref>. Employing exclusively metallic materials in the filtering structure eliminates the inherent losses associated with dielectrics, resulting in improved efficiency. In Fig. <ref>, a preliminary analysis of the unit cell is conducted, assuming periodicity along the direction of propagation z. The dispersion diagram depicts various configurations of the cell, showcasing the changes in its behavior as the dimensions of the resonators are altered. Fig. <ref> illustrates the effect of modifying the resonator's size within a range of ±0.5 mm, while Figs. <ref> and <ref> demonstrate adjustments to the indicated dimensions within 10% of their original values. The results presented in Fig. <ref> reveal that the reference unit cell establishes a passband for the first and higher-order modes, with a distinct stopband between the first and second modes.
The presence of wide stopbands is expected due to the opacity of the cell. However, when the resonators approach a resonance, the opaque character diminishes, opening a passband for the EM field. By varying the parameters, it is possible to manipulate the behavior and frequency range of the first mode. At the same time, more significant changes to the resonator shape allow for modifications in the higher-order modes. After modeling and parameterizing the baseline unit cell, the subsequent step involves its utilization for constructing the cascade filter. To achieve the desired filtering properties in a periodic grid, multiple instances of these unit cells are gathered together, resulting in the formation of our 7^th order unit cell (cascaded along the z-direction) for each polarization. This study concentrates explicitly on a filter created by concatenating 7 unit cells, as depicted in Fig. <ref>. As it is well-known, increasing the number of concatenated resonators leads to more resonances. Consequently, this makes it possible to keep a low level of S_11 (below -20 dB in our case) as a design objective in a wide band. Moreover, the transition between the passband and stopband exhibits a more pronounced and abrupt response. The design process primarily involves determining the dimensions of each resonator to achieve the desired couplings and resonances. In this study, we focus on the design of a passband filter centered at 13 GHz with a bandwidth of 1.4 GHz. This example serves as a means of validation to showcase the feasibility of the structure's functionality and manufacturability. Additionally, this investigation sets the foundation for tackling more intricate scenarios in future works. The resulting frequency response of the infinite FSS is detailed in Fig. <ref>, revealing transition bands of approximately 100 MHz. The electromagnetic coupling matrix, as well as the infinite FSS response, of the final design are provided in format and can be accessed from <cit.>. Previous iterations in the full-wave optimization loop are detailed in <cit.>. As previously discussed, we aim to extract comprehensive physical information from a single FEM simulation. Merely obtaining S-parameter information from a full-wave simulation may not be sufficient for achieving an optimal design. Therefore, it becomes crucial to understand the actual internal state of the FSS unit cell from an electromagnetic perspective. In our approach, as discussed in Section <ref>, we utilize a single FEM analysis to derive the electromagnetic coupling matrix, which elucidates the EM behavior among the local EM resonators within the FSS unit cell <cit.>. This electromagnetic coupling matrix is then employed for a tailored synthesis of the infinite FSS response directly in the EM domain. Consequently, we obtain a target electromagnetic coupling matrix that serves as our reference to tune the frequency response of the infinite FSS. Our optimization loop is guided by this target electromagnetic coupling matrix, enabling us to achieve the desired electrical response within a few iterations, requiring only a tiny number of full-wave FEM simulations. §.§ Tolerance Analysis The manufacturing stage plays a pivotal role in producing cascade filters, as the potential manufacturing tolerances can significantly influence their performance. A comprehensive tolerance analysis has been conducted to ensure accurate measurement results aligned with the intended design specifications. 
This analysis aims to assess manufacturing tolerances' impact, guaranteeing the desired performance. The accuracy restrictions determine that the most convenient fabrication method for such structures is the laser trimming of metal sheets. In this regard, a nominal laser resolution of 25 μ m is considered, labeled as the variation factor Δ s. Subsequently, an analysis using 2Δ s is conducted to assess the continued validity of the structural performance. The tolerance analysis involves modifying the dimensions of one or several resonators by applying the variation factor Δ s. This approach allows us to examine the evolution of S_11 from its original design. For each case, the corresponding variable i assumes a value within the range of its nominal value, s_i, with a deviation of plus or minus the variation factor Δ s, denoted as [s_i- Δ s, s_i+Δ s]. In our study, we follow a systematic approach. Initially, we analyze the tolerances for individual resonators sequentially, starting from the end and progressing toward the middle (four resonators in total). Later, we examine the two outermost resonators concurrently. Subsequently, we consider the simultaneous analysis of the five central resonators. Finally, we study the variation of all the resonators simultaneously. These specific sets of resonators are chosen to validate or refute the hypothesis that the end resonators exhibit a more significant impact on the degradation of the frequency response caused by manufacturing tolerances. Based on these results in Figs. <ref> and <ref>, it is evident that no specific resonator exhibits a significant effect due to manufacturing tolerances. However, it is observed that the resonators at the ends are comparatively less sensitive. This observation holds significance. Note that the end resonators play a crucial role in achieving optimal performance during the simulation stage. Thus, the design demonstrates robustness against unavoidable manufacturing tolerances. § EXPERIMENTAL VALIDATION In order to validate the design results based on the electromagnetic coupling matrix technique, a complete filtering FSS structure is manufactured according to the previous design for the infinite FSS, based on a finite periodic grid of filtering cells made of 7 cascaded resonator cells for two independent polarizations (a 20×20 unit cell prototype is taken into account). For the sake of accuracy, a laser trimming process over steel plates is considered. According to the manufacturer specifications, the trimming tolerances are ≤ 30 μ m, far below 2Δ s. The plates are designed to be assembled as a 3D puzzle, as shown in Fig. <ref>. §.§ Experimental set-up and GRL Calibration The measurements are carried out in the microwave and millimeter measuring facilities of the Smart Wireless Technologies Lab (SWT-Lab) of the University of Granada, as detailed in Fig. <ref>. Before the characterization process, the set-up must be appropriately aligned and calibrated. Two methods are proposed to be used together to perform the system calibration. First, a TRL calibration is performed at the end of the waveguide feeding the antenna. For this purpose, an aluminum kit has been designed for the WR-75 standard, consisting of a short circuit, which will act as a mirror, and a 2 mm long line. This method places the reference planes at the horn input, specifically at the waveguide-horn transition. After this TRL process, a free space GRL calibration is performed. 
This calibration, initially proposed in <cit.> for material characterization, consists of taking two measurements into account: (i) of the empty FSS holder and (ii) of the FSS holder holding a metal plate, in which total reflection of the wavefront is assumed. A similar process to that in the TRL calibration is performed from both measurements. The main difference is that a time-gating process is carried out to isolate the effects of propagation to the sample holder. Within this gating, the effects of the antennas are included. Specifically, a Hamming window is used to include these effects. After calibration, the reference planes are assumed to be in the material on which the normal incidence of a plane wave is occurring. A simplified scheme of the calibration reference planes is shown in Fig. <ref>. §.§ Measurement results The filtering structure is characterized through its S-parameter matrix behavior measurement, applying the GRL reference plane calibration. The results are shown in Fig. <ref>. As it can be seen, the experimental results exhibit good frequency agreement with the electromagnetic coupling matrix design when dimension tolerances are considered. However, it can be seen a light degradation in the structure matching, as well as a slightly rippled transmission behavior. Both effects are justified by the additional mounting tolerances introduced in the 3D mounting of the puzzle structure. The transmission losses are in accordance with the expected ones in a metallic structure, whose roughness and sheet surface defects yield an equivalent conductivity of 0.2·10^6 S/m. Upon incorporating these effects into the CST simulation, we observe a better fitting between the simulation results and the experimental measurements. In any case, these results allow us to validate the electromagnetic matrix coupling method for designing cascaded 3D FSS filtering structures, where manufacturing of the developed prototype has been adequately addressed. § CONCLUSIONS A wideband dual-polarized fully metallic filter for the Ku-band has been designed and experimentally tested. The design tool is based on a full-wave model of electromagnetic coupling matrix, which allows for the identification of the physical resonators of the filter and the corresponding tuning. The filter under consideration is based on a fully-metallic FSS formed by periodic distributions of square-shaped waveguides with dog-bone-shaped resonators perforated along the walls. The resulting in-line architecture is suitable to control both polarizations in a scenario of high independence. Experimental tests have proven to match the wideband behavior predicted theoretically. Tolerance analysis has been performed, as well as an estimation of the roughness influence in the transmission amplitude to estimate the light degradation exhibited in the passband region. sty/IEEEtran
http://arxiv.org/abs/2307.04029v1
20230708183856
On "Indifference" and Backward Induction in Games with Perfect Information
[ "Nimrod Megiddo" ]
cs.AI
[ "cs.AI" ]
http://arxiv.org/abs/2307.04026v1
20230708182039
Dowker-type theorems for disk-polygons in normed planes
[ "Bushra Basit", "Zsolt Lángi" ]
math.MG
[ "math.MG", "52A40, 52A21, 52A30" ]
Dowker-type theorems for disk-polygons in normed planes Bushra Basit and Zsolt Lángi Bushra Basit, Department of Algebra and Geometry, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary, [email protected]. Zsolt Lángi, Department of Algebra and Geometry, Budapest University of Technology and Economics, and MTA-BME Morphodynamics Research Group, Műegyetem rkp. 3., H-1111 Budapest, Hungary, [email protected]. Partially supported by the National Research, Development and Innovation Office, NKFI, K-147544 grant. 2020 Mathematics Subject Classification: 52A40, 52A21, 52A30. A classical result of Dowker (Bull. Amer. Math. Soc. 50: 120-122, 1944) states that for any plane convex body K in the Euclidean plane, the areas of the maximum (resp. minimum) area convex n-gons inscribed (resp. circumscribed) in K form a concave (resp. convex) sequence. It is known that this theorem remains true if we replace area by perimeter, the Euclidean plane by an arbitrary normed plane, or convex n-gons by disk-n-gons, obtained as the intersection of n closed Euclidean unit disks. The aim of our paper is to investigate these problems for C-n-gons, defined as intersections of n translates of the unit disk C of a normed plane. In particular, we show that Dowker's theorem remains true for the areas and the perimeters of circumscribed C-n-gons, and for the perimeters of inscribed C-n-gons. We also show that in the family of origin-symmetric plane convex bodies, for a typical element C with respect to Hausdorff distance, Dowker's theorem for the areas of inscribed C-n-gons fails. § INTRODUCTION For any integer n ≥ 3 and plane convex body K, let A_n(K) (resp. a_n(K)) denote the infimum (resp. supremum) of the areas of the convex n-gons circumscribed about (resp. inscribed in) K. Verifying a conjecture of Kerschner, Dowker <cit.> proved that for any plane convex body K, the sequences { A_n(K) } and { a_n(K) } are convex and concave, respectively. It was proved independently by L. Fejes Tóth <cit.>, Molnár <cit.> and Eggleston <cit.> that the same statements remain true if we replace area by perimeter, where the last author also showed that these statements are false if we replace area by Hausdorff distance. These results are known to be true also in any normed plane <cit.>. Dowker's theorems have become important in many areas of discrete geometry, in particular in the theory of packing and covering <cit.>, and are often used even today (see e.g. <cit.>). Among the many variants of Dowker's theorems that have appeared in the literature, we mention only one, which is related to the notion of spindle convexity. This concept goes back to a paper of Mayer <cit.> who, for any given convex body C in Euclidean space, considered sets X with the property that for any points p,q ∈ X, X contains the intersection of all translates of C containing p,q. He called these sets hyperconvex. His paper led to several papers in this topic in the 1930s and 40s, which, however, seems to have been forgotten by the end of the century. In modern times, a systematic investigation of hyperconvex sets was started in the paper <cit.> in 2007 for the special case that C is a closed Euclidean ball, and a similar paper <cit.> appeared in 2013, dealing with any convex body C (see also <cit.>). Hyperconvex sets have appeared in the literature under several different names: spindle convex, strongly convex or superconvex sets (see e.g. <cit.>), and appear in different areas of mathematics <cit.>.
In this paper, we follow the terminology in <cit.>, and call a set satisfying the property in Mayer's paper C-spindle convex, or shortly C-convex, and if C is a closed Euclidean unit ball, we call it spindle convex (see Definition <ref>). One of the results related to spindle convex sets is due to G. Fejes Tóth and Fodor <cit.> who extended Dowker's theorems, together with their variants for perimeter, for spindle convex sets; in these theorems the role of inscribed or circumscribed convex n-gons is played by the so-called disk-n-gons, obtained as the intersections of n closed Euclidean unit disks. They also proved similar theorems in hyperbolic or spherical plane. Our main goal is to investigate a normed version of the problem in <cit.>. To state our results, recall that the unit ball of any finite dimensional normed space is a convex body symmetric to the origin o, and any such body is the unit ball of a finite dimensional normed space. Thus, in the paper we choose an arbitrary o-symmetric convex disk C in the real normed space ^2, and work in the normed plane with unit disk C, which we regard as ^2 equipped with the norm ||·||_C of C. In the paper, by a convex disk we mean a compact, convex planar set with nonempty interior. We denote the family of convex disks by , and the family of o-symmetric convex disks by _o. In the paper we regard and _o as topological spaces with the topology induced by Hausdorff distance. Before presenting our results, recall the well-known fact that any finite dimensional real normed space can be equipped with a Haar measure, and that this measure is unique up to multiplication of the standard Lebesgue measure by a scalar (cf. e.g. <cit.>). This scalar does not play a role in our investigation and in the paper (·) denotes 2-dimensional Lebesgue measure. For any C ∈_o and convex polygon Q, we define the C-perimeter of Q as the sum of the lengths of the sides of Q, measured in the norm generated by C. The C-perimeter of a convex disk K ⊂^2, denoted by _C(K), is the supremum of the C-perimeters of all convex polygons inscribed in K. We note that, moving its vertices one by one to the boundary of K in a suitable direction, for any convex polygon Q contained in K one can find a convex polygon Q' inscribed in K with _C(Q) ≤_C(Q'). This shows, in particular, that for any two plane convex bodies K ⊆ L ⊂^2, we have _C(K) ≤_C(L), with equality if and only if K=L (see also <cit.>). Furthermore, it is worth observing that a straightforward modification of Definition <ref> can be used to define the C-length of a rectifiable curve Γ⊂^2, denoted by _C(Γ). Our next definition can be found in <cit.> and its origin goes back to <cit.>. Let C ∈_o and consider two (not necessarily distinct) points p, q ∈^2 such that a translate of C contains both p and q. Then the C-spindle (denoted as [p,q]_C) of p and q is the intersection of all translates of C that contain p and q. If no translate of C contains p and q, we set [p,q]_C = ^2. We call a set K ⊂^2 C-spindle convex (or shortly C-convex), if for any p,q ∈ K, we have [p,q]_C ⊆ K. We recall from <cit.> that a closed set in ^2 different from ^2 is C-convex if and only if it is the intersection of some translates of C. The intersection of n translates of C is called a C-n-gon for n ≥ 3. In our next definition and throughout the paper, (·) denotes standard Lebesgue measure. Let n ≥ 3 and let K be a C-convex disk in ^2, where C ∈_o. 
We set

Â_n^C(K) = inf{(Q) : Q is a C-n-gon circumscribed about K };
â_n^C(K) = sup{(Q) : Q is a C-n-gon inscribed in K };
P̂_n^C(K) = inf{_C(Q) : Q is a C-n-gon circumscribed about K };
p̂_n^C(K) = sup{_C(Q) : Q is a C-n-gon inscribed in K }.

For any C ∈_o and C-convex disk K, the sequences {Â_n^C(K) }, {P̂_n^C(K) } are convex, and the sequence {p̂_n^C(K) } is concave. That is, for any n ≥ 4, we have Â_n-1^C(K)+Â_n+1^C(K) ≥ 2 Â_n^C(K), P̂_n-1^C(K)+P̂_n+1^C(K) ≥ 2 P̂_n^C(K), and p̂_n-1^C(K)+p̂_n+1^C(K) ≤ 2 p̂_n^C(K).

As a consequence of Theorem <ref>, we prove Theorem <ref>, and recall that similar statements have been derived in <cit.> for the Euclidean areas of inscribed and circumscribed polygons from the classical results of Dowker in <cit.> (for their spindle convex variants, see <cit.>).

Let n ≥ 3 and k ≥ 2. Assume that k is a divisor of n and both K and C have k-fold rotational symmetry. Then there are C-n-gons Q^A, Q^P circumscribed about K which have k-fold rotational symmetry, and (Q^A)= Â_n^C(K) and _C(Q^P)= P̂_n^C(K). Similarly, there is a C-n-gon Q^p inscribed in K which has k-fold rotational symmetry, and _C(Q^p)= p̂_n^C(K).

Before our next theorem, we remark that in a topological space ℱ, a subset is called residual if it is a countable intersection of sets each of which has dense interior in ℱ. The elements of a residual subset of ℱ are called typical. Our next result shows that Dowker's theorem for the sequence { â_n^C(K) } fails in a strong sense.

A typical element C of _o satisfies the property that for every n ≥ 4, there is a C-convex disk K with â_n-1^C(K) + â_n+1^C(K) > 2 â_n^C(K).

The structure of the paper is as follows. In Section <ref>, we present the necessary notation and prove some lemmas. Then in Sections <ref> and <ref> we prove Theorems <ref> and <ref>, and Theorem <ref>, respectively. Finally, in Section <ref>, we collect our additional remarks and propose some open problems.

§ PRELIMINARIES

In the paper, for simplicity, for any x,y ∈^2, we denote by [x,y] the closed segment with endpoints x,y. We equip ^2 also with a Euclidean norm, which we denote by ||·||, and use the notation B^2 for the Euclidean closed unit disk centered at o. Recall that the Euclidean diameter of a compact set X ⊂^2 is the Euclidean distance of a farthest pair of points in X. If we replace Euclidean distance by distance measured in the norm of C, we obtain the C-diameter of X. Recall that for any set X ⊆^2, the C-convex hull, or shortly C-hull, is the intersection of all C-convex sets that contain X. We denote it by _C(X), and note that it is C-convex, and if X is closed, then it coincides with the intersection of all translates of C containing X <cit.>.

In the following list we collect some elementary properties of C-spindles and C-n-gons that we are going to use frequently in the paper. We have the following.

(a) For any x,y ∈^2 with ||x-y||_C ≤ 2, [x,y]_C is the intersection of at most two translates of C, and if [x,y]_C is a translate of C, then ||x-y||_C=2.
(b) Conversely, a nonempty intersection of at most two translates of C is the C-spindle of two (not necessarily distinct) points.
(c) For any x, y ∈^2, [x,y]_C=[x,y] if and only if a translate of C contains [x,y] in its boundary.
(d) If [x,y]_C ≠ [x,y], then [x,y]_C is a centrally symmetric convex disk whose boundary consists of two arcs, connecting x and y, that are contained in the boundary of some translates of C.
(e) Any C-n-gon is the C-hull of at most n points contained in a translate of C, and vice versa.
Let x,y ∈ C ∈_o, with ||x-y||_C < 2. Then, for any sequences x_m → x, y_m → y, C_m → C with x_m,y_m ∈^2 and C_m ∈_o, we have [x_m,y_m]_C_m→ [x,y]_C. We observe that the statement in Remark <ref> does not necessarily hold if ||x-y||_C = 2. As an example, we can choose C as a parallelogram, x_m=x and y_m=y as the midpoints of two opposite sides S_1, S_2 of C, and { C_m } as a sequence of o-symmetric hexagons inscribed in C whose elements intersect S_1 and S_2 only in x and y, respectively. For any n ≥ 4, let ^n_a denote the subfamily of the elements C of _0 satisfying the Dowker-type inequality â_n-1^C(K) + â_n+1^C(K) ≤ 2 â_n^C(K) for any C-convex disk K. We define ^n_A, ^n_p and ^n_P similarly. Our first lemma describes the topological properties of these families. For any n ≥ 4, ^n_a, ^n_A, ^n_p and ^n_P are closed. We prove the assertion only for ^n_a, as for the other quantities the proof is analogous. Let C ∉^n_a, and suppose for contradiction that there is a sequence C_m ∈_a^n with C_m → C. Since C ∉^n_a, there is a C-convex disk K satisfying â_n-1^C(K) + â_n+1^C(K) > 2 â_n^C(K). By Remark <ref>, if K contains points at C-distance equal to 2, then K is a C-spindle, which yields that â_j(K) = (K) for any j ≥ 3. Thus, according to our assumptions, K does not contain points at C-distance equal to 2, i.e its C-diameter is strictly less than 2. On the other hand, since K is C-convex, K is the intersection of the translates of C that contain it. Thus, there is a set X ⊂^2 such that K = ⋂_x ∈ X (x+C). Let K_m = ⋂_x ∈ X (x+C_m). Then, clearly, K_m is C_m-convex, and K_m → K. For j=n-1,n+1, let Q_j be a C-j-gon inscribed in K such that (Q_j)=â_j^C(K). Then, as K_m → K and C_m → C, there are sequences { Q_n-1^m } and { Q_n+1^m } such that for j=n-1,n+1, Q_j^m is a C_m-j-gon inscribed in K_m, and Q_j^m → Q_j. By the properties of Hausdorff distance, the C_m-diameter of K_m is strictly less than 2 if m is sufficiently large. Then we can apply Remark <ref>, and obtain that (Q_j^m) →(Q_j) for j=n-1,n+1. From this, we have (Q_n-1^m)+(Q_n+1^m) →â_n-1^C(K) + â_n+1^C(K). On the other hand, since C_m ∈^n_a, there is a sequence { Q_n^m } such that Q_n^m is a C_m-n-gon inscribed in K_m, and 2 (Q_n^m) ≥(Q_n-1^m)+(Q_n+1^m). By compactness, we may assume that { Q_n^m } converges to a C-n-gon Q_n. Clearly, Q_n is contained in K, and by Remark <ref>, (Q_n^m) →(Q_n). Thus, â_n-1^C(K) + â_n+1^C(K) ≤ 2 (Q_n) ≤ 2 â_n^C(K); a contradiction. Lemma <ref> readily yields Corollary <ref>, since the intersection of arbitrarily many closed sets is closed. The family ⋃_n=4^∞_a^n of the elements C of _0 satisfying â_n-1^C(K) + â_n+1^C(K) ≤ 2 â_n^C(K) for all n ≥ 4 and all C-convex disks K is closed in _0. Similar statements hold for the families ⋃_n=4^∞_p^n, ⋃_n=4^∞_A^n and ⋃_n=4^∞_P^n. Let C ∈_o, and let x,y be points with ||x-y||_C ≤ 2. Then the arc-distance ρ_C (x,y) of x,y with respect to C (or shortly, C-arc-distance of x and y) is the minimum of the C-length of the arcs, with endpoints x,y, that are contained in z+(C) for some y ∈^2. For any x,y ∈^2 with ||x-y||_C ≤ 2, if [x,y]_C ≠ [x,y], then ρ_C (x,y) = 1/2_C ([p,q]_C). Furthermore, if [x,y]_C = [x,y], then ρ_C(x,y)=||x-y||_C. We recall the following version of the triangle inequality from <cit.>. [Lángi, Naszódi, Talata] Let C ∈_0, and let x,y,z be points such that each pair has a C-arc-distance. (a) If y ∈ [x,z]_C, then ρ_C(x,y)+ρ_C(y,z) ≤ρ_C(x,z). (b) If y ∈ [x,z]_C, then ρ_C(x,y)+ρ_C(y,z) = ρ_C(x,z). 
(c) If y ∉ [x,z]_C and C is smooth, then ρ_C(x,y)+ρ_C(y,z) ≥ρ_C(x,z). We start with a consequence of this inequality. Let p,q,r,s ∈^2 be distinct points contained in a translate of the smooth o-symmetric convex disk C, and assume that _C {p,q,r,s} contains all of them and in this counterclockwise order. Then ρ_C(p,q)+ρ_C(r,s) ≤ρ_C(p,r)+ρ_C(q,s). Note that according to our conditions, the two C-arcs in the boundary of [p,r]_C intersect both C-arcs consisting of the boundary of [q,s]_C. Let s' denote the intersection point of one of the C-arcs in [p,r]_C and one of the C-arcs in [q,s], where the arcs are chosen to satisfy s' ∈_C { p,q,r } and s' ∈_C {p,r,s}. Then s' ∉ [p,q]_C and s' ∉ [r,s]_C. Since [s,s']_C, [q,s']_C ⊂ [q,s]_C, it is easy to see that p,q,s', and also r,s,s' are in C-convex position. Thus, by Lemma <ref>, we have ρ_C(p,q) ≤ρ_C(p,s')+ρ_C(q,s') and ρ_C(r,s) ≤ρ_C(r,s') + ρ_C(s,s'), implying the assertion. In the following lemma, let ^1 denote the Euclidean unit circle centered at the origin. For simplicity, if x,y ∈^1, we denote by xy the Euclidean closed circle arc obtained as the orbit of x when it is rotated around o in counterclockwise direction until it reaches y. Let 𝒮 denote the family of closed circle arcs xy of S. Furthermore, we say that a function f : 𝒮→ has a k-fold rotational symmetry for some positive integer k, if for any S,S' ∈𝒮, where S' is a rotated copy of S in counterclockwise direction with angle 2π/k, we have f(S)=f(S'). Lemma <ref> can be regarded as a functional form of Dowker's theorems. Let f : 𝒮→ be a bounded function with f(xx)=0 for all x ∈^1. For any integer n ≥ 3, let M_n = sup{∑_S ∈ X f( S ) : X ⊂𝒮 is a tiling of ^1 with |X| = n }. If for any x_2x_3⊂x_1x_4, we have f(x_1x_3)+f(x_2x_4) ≥ f(x_1x_4)+f(x_2x_3), then the sequence { M_n } is concave. Furthermore, if in addition, there is some positive integer k such that k | n and f has k-fold rotational symmetry, and there is an n-element tiling X of ^1 such that M_n = ∑_S ∈ X f(S) then there is an n-element tiling X' of ^1 with k-fold rotational symmetry such that M_n = ∑_S ∈ X' f(S). Before the proof, we remark that X ⊂𝒮 is called an m-tiling of ^1 for some positive integer m if every point of ^1 belongs to at least m members of X, and to the interiors of at most m members of X. To prove the assertion for { M_n }, we need to show that M_n-1+M_n+1≤ 2M_n is satisfied for any n ≥ 4. In other words, we need to show that for any tilings X={x_0x_1, …x_n-2x_n-1}, Y={y_0y_1, …y_n y_n+1} of ^1, there are tilings Z={z_0z_1, …z_n-1z_n} and W={w_0w_1, …w_n-1w_n} of ^1 such that ∑_i=1^n-1 f(x_i-1x_i) + ∑_i=1^n+1 f(y_i-1y_i) ≤∑_i=1^n f(z_i-1z_i) + ∑_i=1^n f(w_i-1w_i). Note that the union A_0 of the two tilings is a 2-tiling of ^1. Assume that x_1, x_2, …, x_n-1, and y_1,y_2, …, y_n+1 are in this counterclockwise order in ^1, and that y_1 ∈x_1x_2. Due to the possible existence of coinciding points in the above two sequences, we unite these sequences as a single sequence v_1, v_2, …, v_2n in such a way that the points are in this counterclockwise order in ^1, v_1=x_1, and removing the x_i (resp. y_j) from this sequence we obtain the sequence y_1, …, y_n+1 (resp. x_1, …, x_n-1). In the proof we regard this sequence as a cyclic sequence, where the indices are determined mod 2n, and, with a little abuse of notation, we say that v_iv_j covers v_kv_l only if v_kv_l⊆v_iv_j and i < k < l < j < i+2n. 
Our main goal will be to modify the 2-tiling A_0 in such a way that the sum of the values of f over the arcs does not decrease but the number of covering pairs strictly decreases. Note that since A_0 is the union of two tilings consisting of (n-1) and (n+1) arcs, respectively, A_0 contains covering pairs. Assume that v_iv_j covers v_kv_l. Then let A_1 denote the 2-tiling of ^1 in which v_iv_j and v_kv_l are replaced by v_iv_l and v_kv_j. According to our conditions, ∑_S ∈ A_0 f(S) ≤∑_S ∈ A_1 f(S), and the number of covering pairs in A_1 is strictly less than in A_0. Repeating this procedure we obtain a 2-tiling A_t of ^1 for which ∑_S ∈ A_0 f(S) ≤∑_S ∈ A_t f(S) and which does not contain covering pairs. Then A_t decomposes into the two tilings {v_1v_3, v_3v_5, …, v_2n-1v_1} and {v_2v_4, v_4v_6, …, v_2nv_2}, each of which contains exactly n arcs. This proves the assertion for { M_n }.

Now we prove the second part. Let X be an n-element tiling of ^1 such that M_n = ∑_S ∈ X f(S). Assume that X does not have k-fold rotational symmetry. For i=1,2,…, k, let X_i denote the rotated copy of X by 2iπ/k in counterclockwise direction. Then Y= ⋃_i=1^k X_i is a k-fold tiling of ^1 with k-fold rotational symmetry, and ∑_S ∈ Y f(S) = k ∑_S ∈ X f(S). Since X has no k-fold rotational symmetry, Y contains covering pairs, and we may apply the argument in the previous paragraph.

We remark that an analogous proof yields Lemma <ref>, the proof of which we leave to the reader.

Let f : 𝒮→ be a bounded function with f(pp)=0 for all p ∈^1. For any integer n ≥ 3, let m_n = inf{∑_S ∈ X f( S ) : X ⊂𝒮 is a tiling of ^1 with |X| = n }. If for any x_2x_3⊂x_1x_4, we have f(x_1x_3)+f(x_2x_4) ≤ f(x_1x_4)+f(x_2x_3), then the sequence { m_n } is convex. Furthermore, if in addition, there is some positive integer k such that k | n, and f has k-fold rotational symmetry, and there is an n-element tiling X of ^1 such that m_n = ∑_S ∈ X f(S), then there is an n-element tiling X' of ^1 with k-fold rotational symmetry such that m_n = ∑_S ∈ X' f(S).

In the next lemma, by the partial derivatives (∂_p f) (p_0q_0) (resp. (∂_q f) (p_0q_0)) of the function f(pq) at p_0q_0, we mean the derivative of the function f(p(t)q_0) (resp. f(q(t)p_0)) at t=0, where p(t) (resp. q(t)) is the rotated copy of p_0 (resp. q_0) around o by angle t in counterclockwise direction.

Let f : 𝒮→ be a bounded function with f(pp) = 0 for all p ∈^1. Assume that for any p_0q_0∈^1, where p_0 ≠ q_0, (∂_p ∂_q f)(p_0q_0) is a continuous function of p_0q_0 in both variables. Then, for any x_1, x_2, x_3, x_4 ∈^1 in this counterclockwise order, we have f(x_1x_3)+f(x_2x_4) ≥ f(x_1x_4)+f(x_2x_3) if and only if (∂_p ∂_q f)(p_0q_0) ≥ 0 for all p_0 ≠ q_0. Similarly, for any x_1, x_2, x_3, x_4 ∈^1 in this counterclockwise order, we have f(x_1x_3)+f(x_2x_4) ≤ f(x_1x_4)+f(x_2x_3) if and only if (∂_p ∂_q f)(p_0q_0) ≤ 0 for all p_0 ≠ q_0.

We prove only the first part. Assume that (∂_p ∂_q f)(p_0q_0) ≥ 0 for all p_0 ≠ q_0. Let x_2x_3⊂x_1x_4. Then, by the Newton-Leibniz Theorem we have 0 ≤∫_x_3^x_4∫_x_1^x_2 (∂_p ∂_q f)(p_0q_0) d p_0 d q_0 = f(x_2x_4)-f(x_2x_3)-f(x_1x_4)+f(x_1x_3). Furthermore, if we have (∂_p ∂_q f)(p_0q_0) < 0 for some p_0 ≠ q_0, then, by continuity and the same argument, there are some points x_1,x_2 and x_3,x_4 sufficiently close to p_0 and q_0, respectively, such that x_2x_3⊂x_1x_4, and 0 > f(x_2x_4)-f(x_2x_3)-f(x_1x_4)+f(x_1x_3).
§ PROOF OF THEOREMS <REF> AND <REF> Note that by Lemma <ref> and Corollary <ref>, it is sufficient to prove Theorem <ref> for any everywhere dense subset of _o, and applying a similar consideration, we have the same for Theorem <ref>. Thus, we may assume that C has C^∞-class boundary and strictly positive curvature. Under this condition, the quantities defined in Definition <ref> are continuous functions of K for any fixed value of n, and thus, we may assume that K has C^∞-class boundary, and the curvature of (K) at any point p is strictly greater than the curvature of (C) at the point q with the same outer unit normal as p. Under the above conditions, for any points p,q ∈ (K), [p,q]_C ∖{ p,q }⊂ (K). In the proof we identify ^1 with the set / { 2kπ : k ∈ℤ}. Let us parametrize (K) as the curve Γ : ^1 →^2, where the outer unit normal vector at Γ(φ) is (cosφ, sinφ). Then, for any two points Γ(φ_1), Γ(φ_2) with φ_1 < φ_2 < φ_1+2π, let us denote the arc of Γ connecting them in counterclockwise direction by Γ|_[φ_1,φ_2]. Furthermore, recall <cit.>, stating that K is the intersection of the translates of C containing it. Thus, for any φ∈ [0,2π], there is a unique translate x+C of C containing K with Γ(φ) ∈ (x+C). We denote this translate by C(φ)=x(φ)+C, and call it the supporting C-disk of K at Γ(φ) (see Figure <ref>). We define the following regions: (i) r(φ_1,φ_2) is the closure of the connected component of K ∖ [Γ(φ_1), Γ(φ_2)]_C containing Γ|_[φ_1,φ_2]; (ii) R(φ_1,φ_2) is the closure of the connected component of (C(φ_1) ∩ C(φ_2) ∖ K) containing Γ|_[φ_1,φ_2]; (1) p(φ_1,φ_2) = _C(r(φ_1,φ_2) - _C(Γ|_[φ_1,φ_2]); (2) A(φ_1,φ_2) = (R(φ_1,φ_2)); (3) P(φ_1,φ_2) = _C(R(φ_1,φ_2) - _C(Γ|_[φ_1,φ_2]). §.§ The proof of Theorems <ref> and <ref> for Â_n^C(K) Let I[X] : ^2 → denote the indicator function of X ⊂^2. Then it can be seen directly that for any φ_1 < φ_2 < φ_3 < φ_4 < φ_1+2π, the function I[R(φ_1,φ_4)] + I[R(φ_2,φ_3)] - I[R(φ_1,φ_3)]- I[R(φ_2,φ_4)] has nonnegative values at every point. Thus, the conditions of Lemma <ref> are satisfied, implying the statement. §.§ The proof of Theorems <ref> and <ref> for p̂_n^C(K) Let φ_1 < φ_2 < φ_3 < φ_4 < φ_1+2π. Then, by Lemma <ref>, ρ_C(Γ(φ_1),Γ(φ_4))+ρ_C(Γ(φ_2),Γ(φ_3)) ≤ρ_C(Γ(φ_1),Γ(φ_3))+ρ_C(Γ(φ_2),Γ(φ_4)). Thus, the conditions of Lemma <ref> are satisfied, implying our statement. §.§ The proof of Theorems <ref> and <ref> for P̂_n^C(K) By Lemmas <ref> and <ref>, it is sufficient to prove that for any φ_1 < φ_2 < φ_1+π, the function ∂_φ_1∂_φ_2 P is a continuous nonpositive function. In the remaining part of the subsection we prove this property. For brevity, for any α < β < α +2π, we define z(α,β) as the intersection point of (C(α)) and (C(β)) contained in the boundary of R(α,β). First, observe that P(φ_1,φ_2) = ρ_C(Γ(φ_1),z(φ_1,φ_2))+ ρ_C(z(φ_1,φ_2),Γ(φ_2)). Clearly, since C has C^∞-class boundary, ρ_C(·,·) is a C^∞-class function, implying that P(φ_1,φ_2) is C^∞-class, and ∂_φ_1∂_φ_2 P is continuous. Now, let 0 < | Δ_1| , |Δ_2 | ≤ε for some sufficiently small ε > 0, and set p=z(φ_1,φ_2), q_1=z(φ_1,φ_2+Δ_2), q_2 = z(φ_1 + Δ_1,φ_2) and q=z(φ_1+Δ_1,φ_2+Δ_2). To prove the assertion, it is sufficient to prove that 0 ≥1/Δ_1( P(φ_1+Δ_1,φ_2+Δ_2)-P(φ_1+Δ_1,φ_2)/Δ_2 - P(φ_1,φ_2+Δ_2)-P(φ_1,φ_2)/Δ_2) = = 1/Δ_1 Δ_2( P(φ_1+Δ_1,φ_2+Δ_2) - P(φ_1+Δ_1,φ_2) - P(φ_1,φ_2+Δ_2) + P(φ_1,φ_2) ). We do it in the case that Δ_1 < 0 and Δ_2 > 0, in the other cases a straightforward modification yields the assertion. 
Note that in this case it is sufficient to show that ρ_C(p,q_1)+ρ_C(p,q_2) ≤ρ_C(q,q_1)+ρ_C(q,q_2). For i=1,2, let v_i denote the tangent vector of C(φ_i) at p pointing `towards' q_i in its boundary, and let w_i denote the tangent vector of K at Γ(φ_i) pointing towards p in (C(φ_i)). Let C(φ)= x(φ)+C. Then lim_Δ→ 0 ± 0x(φ+Δ)-x(φ)/|x(φ+Δ)-x(φ)| = ± v for any value of φ, where v is the unit tangent vector of (K) at Γ(φ) pointing in the positive direction. Let Θ(φ) denote the point of (C) with outer unit normal vector (cosφ, sinφ). Then x(φ)=Γ(φ)-Θ(φ) and more generally, x(φ+Δ)-x(φ) = ( Γ(φ+Δ)- Γ(φ) ) - ( Θ(φ+Δ)- Θ(φ) ). Note that lim_Δ→ 0 ± 0Γ(φ+Δ)- Γ(φ)/|Γ(φ+Δ)- Γ(φ)| = lim_Δ→ 0 ± 0Θ(φ+Δ)- Θ(φ)/|Θ(φ+Δ)- Θ(φ)| = ± v, and, by the choice of the parametrization of Γ and Θ, lim_Δ→ 0|Θ(φ+Δ)- Θ(φ)|/|Γ(φ+Δ)- Γ(φ)| = κ_Γ(φ)/κ_Θ(φ), where κ_Γ(φ) and κ_Θ(φ) denote, the curvature of Γ and Θ at Γ(φ) and Θ(φ),respectively. Thus, the assertion follows from our assumption that κ_Θ(φ) ≠κ_Γ(φ). By Remark <ref>, C(φ_1) ∩ C(φ_2) is the C-spindle of p and another point, which we denote by p'. By convexity, the tangent vectors of (C(φ_1)) pointing in counterclockwise direction, turn in counterclockwise direction from p to p'. Thus, the directions of the vectors v_2, w_1, v_1 are in this order in counterclockwise orientation, and the same holds for the vectors v_2, w_2, v_1. For i=1,2, let C(φ_i+Δ_i)=y_i + C(φ_i). Then, by Lemma <ref>, if Δ_i is sufficiently small, we have that the vectors y_1,y_2 are between v_1 and v_2 according to counterclockwise orientation. Consider the translate C_i' of C(φ_i) by q_i-p. The boundary of this translate contains q_i, and v_i is a tangent vector of C_i' at q_i. Thus, if q' = q_1+q_2-p (i.e. q' is the unique point for which p,q_1,q',q_2 are the vertices of a parallelogram in this counterclockwise order), then q' lies in the boundary of both C_1' and C_2'. On the other hand, by our observation about the tangent lines, if Δ_i are sufficiently small, then q' is contain in Q. By symmetry, ρ_C(p,q_1) = ρ_C(q',q_1) and ρ_C(p,q_2) =ρ_C(q',q_2), and thus, the required inequality follows from the remark after Definition <ref>. § PROOF OF THEOREM <REF> We prove the statement in several steps. For brevity, for any points z_1,z_2, …, z_k ∈^2, we set [z_1,z_2,…,z_k] = { z_1,z_2,…, z_k } and [z_1,z_2,…,z_k]_C = _C { z_1,z_2,…, z_k }. Step 1. Let us fix a Cartesian coordinate system, and consider the points p_1=(0,-1-t), p_2=(2.1,-0.9-t), p_3=(t+2,-1), p_4=(t+2,1), p_5=(2.1, 0.9+t), p_6=(0,1+t), q_1=(t,-1), q_2=(t,1), q_3=(-t,1) and q_4=(-t,-1) (see Figure <ref>). In the construction we assume that t is a sufficiently large positive value. We define the hexagon H= [p_1,q_1,q_2,p_6,q_3,q_4] and the octagon K_1 = [p_1,p_2,…,p_6,q_3,q_4]. Note that H ⊂ K_1, and set G = (K_1) ∖(H), and G'=(K_1) ∩(H). In the following, D_1 denotes the Euclidean diameter of K_1. We define C_1 as an o-symmetric convex 14-gon with vertices x_1,x_2,…,x_14 in counterclockwise order such that (a) x_1 and x_8 are on the negative and the positive half of the y-axis, respectively; (b) C_1 is symmetric to both coordinate axes; (c) the sides [x_1,x_2], [x_2,x_3], [x_3,x_4], [x_4,x_5] are parallel to [p_1,p_2], [p_1,p_3], [p_2,p_3] and [p_3,p_4], respectively; (d) we have ||x_2-x_1||, ||x_3-x_2||, ||x_4-x_3|| > D_1, and ||x_5-x_4||=2, i.e. [x_4,x_5] is a translate of [p_3,p_4]. 
Note that by our conditions, for any two point u,v ∈ G, each of the two C_1-arcs in the boundary of [u,v]_C_1 consists of translates of subsets of at most two consecutive sides of C_1, or they contain translates of [x_4,x_5] and possibly translates of subsets of the sides [x_3,x_4] and [x_5,x_6]. In particular, [p_1,p_6]_C_1 = H. We estimate ([p_1,q,p_6]_C_1) for any q ∈ G with nonnegative y-coordinate. In the following p̅=(0,t+2) denotes the midpoint of [p_3,p_4]. Case 1: q ∈ [p̅,p_4]. Then ([p_1,q,p_6]_C_1) consists of G', parts of the segments [p_1,p_3] and [p_4,p_6], and two segments with q as an endpoint, parallel to [p_2,p_3] and [p_4,p_5], respectively. Thus, ([p_1,q,p_6]_C_1) is maximal if q=p̅, implying that ([p_1,q,p_6]_C_1) ≤([p_1,p̅,p_6]_C_1) = (H)+3/2t + 3 Case 2: q ∈ [p_4,p_5]. Assume that the x-coordinate of q is at least t+1. Then the curve ([p_1,q,p_6]_C_1) consists of G', a segment containing [p_1,q_1], a segment parallel to [p_3,p_4] and ending at q, and segment parallel to [p_4,p_6] and ending at q, and a subset of [p_5,p_6]. Observe that if t is sufficiently large, in this case ([p_1,q,p_6]_C_1) is maximal if the x-coordinate of q is equal to t+1. A similar consideration shows that if the x-coordinate of q is at most t+1, then ([p_1,q,p_6]_C_1) is maximal if q=p_5. Thus, in Case 2 we have ([p_1,q,p_6]_C_1) ≤([p_1,p_5,p_6]_C_1) = = (H)+ ([q_2,p_4,p_5,p_6)=1/2( (H)+(K_1) ) - 2 Case 3: q ∈ [p_5,p_6]. Then ([p_1,q,p_6]_C_1) consists of G', a segment parallel to [q_2,p_6] and ending at q, a segment containing [p_1,q_1] as a subset, and a translate of [p_3,p_4]. Thus, in this case ([p_1,q,p_6]_C_1) is maximal if q=p_5, and we have ([p_1,q,p_6]_C_1) ≤([p_1,p_5,p_6]_C_1) = 1/2( (H)+(K_1) ) - 2. Combining our results, if t is sufficiently large, for any q,q' ∈ G ([p_1,q,p_6]_C_1) + ([p_1,q',p_6]_C_1) ≤(H)+(K_1) - 4 < < (H)+(K_1) = ([p_1,p_6]_C_1)+([p_1,p_2,p_5,p_6]_C_1), where we used the observation that [p_1,p_2,p_5,p_6]_C_1 = K_1. In the remaining part of the construction, we fix t in such a way that (<ref>) is satisfied. Step 2. In the next step, based on Step 1, we construct some C_2 ∈_o and a C_2-convex disk K_2 such that â_3^C_2(K_2) + â_5^C_2(K_2) > 2 â_4^C_2(K_2). Let p_7 = (-s,0), where s is sufficiently large, and set K_2 = (K_1 ∪{ p_7 }) (see Figure <ref>). Let D_2 denote the Euclidean diameter of K_2, and let C^+_1 (resp. C^-_1) denotes the set of the points of (C_1) with nonnegative (resp. nonpositive) x-coordinates. We define C_2 as follows: (a) C_2 is symmetric to both coordinate axes. (b) (C_2) contains some translates u+ C^+_1 and -u+C^-_1, where u points in the direction of the positive half of the x-axis. We set w_3=u+x_1. (c) In addition to the above two translates, (C_2) consists of segments [w_1,w_2], [w_2,w_3] and their reflections about one or both of the coordinate axes, such that [w_1,w_2], [w_2,w_3] are parallel to [p_6,p_7] and [p_5,p_7], respectively, and |w_1-w_2|, |w_2-w_3| > D_2. We remark that if s is sufficiently large, then there is some C_2 ∈_o satisfying the above conditions, and K_2 is C_2-convex. In the following, let Q_4 = [z_1,z_2,z_3,z_4]_C_2 denote a maximal area C-4-gon inscribed in K_2. Let H'= (H ∪{ p_7 }) =[p_1,p_6,p_6]_C_2 and observe that K_2 = [p_1,p_2,p_5,p_6,p_7]_C_2. Then, to show the inequality in (<ref>), it is sufficient to show that (H')+(K_2) > 2 (Q_4). Let Q = [p_1,p_5,p_6,p_7]_C_2. By the consideration in Step 1, we have that (Q) = 1/2 ((H')+(K_2))-2. Thus, we have (Q_4) ≥1/2 ((H')+(K_2))-2. 
Let us define the points v_1 and v_6 as the images of p_1 and p_6, respectively, under the homothety with center p_7 and homothety ratio 1/√(s). An elementary computation shows that then v_1 = ( -(1-1/√(s))s, -1+t/√(s)) ∈ [p_1,p_7] and v_6 = ( -(1-1/√(s))s, 1+t/√(s)) ∈ [p_6,p_7]. Note that since |v_2-v_1| = 2(1+t)/√(s) < 2 if s is sufficiently large, and (C_2) contains two vertical segments of length 2, we may assume that [v_1,v_6]_C_2 = [v_1,v_6]. In other words, we may assume that there is a translate of C that contains K_2 ∖ [v_1,p_7,v_6] and does not overlap [v_1,p_7,v_6]. Thus, if z_i ∉ [v_1,p_7,v_6] for any 1 ≤ i ≤ 4, then Q_4 ⊆ K_2 ∖ [v_1,p_7,v_6], implying that in this case (Q_4) ≤(K_2) - ([v_1,p_7,v_6]) = (K_2) - 2 √(s)(1+t) < 1/2 ((H')+(K_2))-2; a contradiction. Consequently, in the following we may assume that z_4 ∈ [v_1,p_7,v_6]. Let v'_5 and v'_7 be the images of p_5 and p_7, respectively, under the homothety with center p_6 and ratio 1/√(s). Note that since there is a side of C parallel to [v_5',v_7'], we have [v_5',v_7']_C_2= [v_5',v_7'], and, as in the previous paragraph, if z_i ∉ [v_1,p_7,v_6] for any 1 ≤ i ≤ 4, then (P_4) ≤(K_2) - ([v_5',v_7',p_6]). On the other hand, we have |p_6-p_7| > s and that the length of the corresponding height of [p_5,p_6,p_7] is greater than 0.1 by the definition of p_5. Thus, ([v_5',v_7',p_6])=([p_5,p_6,p_7])/√(s^2) > 0.1 √(s), implying that since (Q_4) ≥(Q), which otherwise by our inequalities does not hold if s is sufficiently large, we may assume that some z_i, say z_3, is an element of [v_1,p_7,v_6]. We obtain similarly that if s is sufficiently large, some z_i, say z_1, is contained in the triangle [v_7”,p_1,v_2”], where v_7” and v_2” are the images of p_7 and p_2, respectively, under the homothety with center p_1 and ratio 1/√(s). These observations, the consideration in Step 1, and the inequality (Q_4) ≥(Q) yield that as s →∞, we have z_1 → p_1, z_3 → p_6 and z_4 ∈ [v_1,p_7,v_6], and min{ | z_2 - p_2|, |z_2-p_5| }→ 0, implying that in this case (Q_4) →(Q). This shows that if s is sufficiently large, then (H')+(K_2) > 2 (Q_4). Before proceeding to the final step, we make two important observations that we are going to use. Here, by C^+_2 and C^-_2, we denote the parts of (C_2) contained in the closed half planes { x ≥ 0} and { x ≤ 0}, respectively. (1) A straightforward modification of the construction in Step 2 yields, for any n ≥ 4, the existence of some C_n ∈_0 and a C_n-convex disk K_n such that â_n-1^C_n(K_n) + â_n+1^C_n(K_n) > 2 â_n^C_n(K_n). (2) To guarantee the required inequalities in Steps 1 and 2, we used the properties of the arcs of C_2 entirely contained in C^+_2 or C^-_2. Thus, if C_2' is an o-symmetric plane convex body containing C^+_2 and C^-_2 in its boundary, then we have â_3^C_2'(K_2) + â_5^C_2'(K_2) > 2 â_4^C_2'(K_2). We combine these two observations in the following remark. For any n ≥ 4, there is some C_n ∈_o and a C_n-convex disk K_n such that if any C_n' ∈_o contains C_n^+ and C_n^- in its boundary, where by C^+_n and C^-_n, we denote the parts of (C_n) contained in the closed half planes { x ≥ 0} and { x ≤ 0}, respectively, then K_n is C_n'-convex, and â_n-1^C_n'(K_n) + â_n+1^C_n'(K_n) > 2 â_n^C_n'(K_n). Step 3. Now we prove Theorem <ref>. Let n ≥ 4. Recall that ^n_a denotes the elements C of _o such that for any C-convex disk K, we have â_n-1^C(K) + â_n+1^C(K) ≤ 2 â_n^C(K), and set ^n_a = _o ∖^n_a. Observe that by Lemma <ref>, ^n_a is open. We show that it is everywhere dense in _o. 
Let C be an arbitrary element of _o and let ε > 0. Note that for any nondegenerate linear transformation h : ^2 →^2, K is C-convex if and only if h(K) is h(C)-convex, and for any n ≥ 4, if K is C-convex, then â_n^C(K) = â_n^h(C)(h(K)). Thus, without loss of generality, we may assume that there are vertical supporting lines of C meeting (C) at some points ± p of the x-axis. We choose our notation such that p is on the positive half of the axis. Consider the convex disk C_n ∈_0 in Remark <ref>. Let us define the nondegenerate linear transformation h_λ, μ : ^2 →^2 by h_λ,μ(x,y)=(λ x, μ y). Then, if we choose suitable sufficiently small values μ, λ > 0, then there is a translate C^+ of h_λ,μ(C^+_n), and an o-symmetric convex disk C' containing C^+ in its boundary such that C^+ ⊂ (C+ ε B^2) ∖ C, and C ⊂ C'. Then C' ∩ (C+ ε B^2) ∈_o contains translates of h_λ,μ(C^+_n) and h_λ,μ(C^-_n) in its boundary, the Hausdorff distance of C and C' is at most ε, and, if we set K'=h_λ,μ(K_n), by Remark <ref> we have â_n-1^C'(K') + â_n+1^C'(K') > 2 â_n^C'(K'). Thus, ^n_a is everywhere dense, which immediately yields that ⋂_n=4^∞^n_a is residual, implying Theorem <ref>. § REMARKS AND QUESTIONS For C ∈_o, K ∈ and positive integer n ≥ 3, let P̅_n^C(K) = inf{_C(Q) : Q is a convex n- gon circumscribed about K }; p̅_n^C(K) = sup{_C(Q) : Q is a convex n- gon inscribed in K }. As we have observed in the introduction, it is known <cit.> that for any C ∈_o and K ∈, the sequences {P̅_n^C(K) } and {p̅_n^C(K) } are convex and concave, respectively. Our approach yields a new proof of these statements by applying Theorem <ref> for λ C, where λ→∞. Applying Theorem <ref> for λ C with λ→∞, we obtain the following. Let C ∈_o, K ∈ and n ≥ 3. If, for some positive integer k, Let C ∈_o, K ∈, n ≥ 3 and k ≥ 2. Assume that k is a divisor of n and both K and C have k-fold rotational symmetry. Then there is a convex n-gon Q^P circumscribed about K with _C(Q^P)= P̅_n^C(K) such that Q^P has k-fold rotational symmetry. Similarly, there is a convex n-gon polygon Q^p inscribed in K which has k-fold rotational symmetry, and _C(Q^p)= p̅_n^C(K). In the remaining part of the paper, we denote the set (1,∞) ∪{∞} by [1,∞]. Let p,q ∈ [1,∞] satisfy the equation 1/p + 1/q = 1. For any K, L ∈, G. Fejes Tóth <cit.> introduced the weighted area deviation of K,L with weights p,q as the quantity ^p,q(K,L)=p (K ∖ L) + q (L ∖ K). He proved that if for any K ∈, a̅_K^C(n,p,q) denotes the minimal weighted area deviation of K and an arbitrary convex n-gon, then the sequence {a̅_K^C(n,p,q) } is convex. Based on this idea, we introduce the following quantity. Let p,q ∈ [1,∞] satisfy the equation 1/p + 1/q = 1, and let C ∈_0, K ∈_0. We call the quantity _C^p,q(K,L) = p ( _C((K) ∖(L))- _C((L) ∩ K) ) + + q ( _C((L) ∖(K)) - _C ((K) ∩ L) ) the weighted C-perimeter deviation of K,L with weights p,q. Here we note that by convexity, _C((K) ∖(L)) ≥_C((L) ∩ K) and _C((L) ∖(K)) ≥_C ((K) ∩ L), with equality if and only if K ⊆ L and L ⊆ K, respectively. Let p̅_K^C(n,p,q) denote the minimal C-perimeter deviation of K and an arbitrary convex n-gon. We remark that if K is C-convex, by replacing the convex n-gons in the definitions of a̅_K^C(n,p,q) and p̅_K^C(n,p,q) with C-n-gons, we may analogously define the quantities â_K^C(n,p,q) and p̂_K^C(n,p,q), respectively. This leads to the following problems. Prove or disprove that for any p,q ∈ [1,∞ ] with 1/p + 1/q = 1, C ∈_o and K ∈, the sequence {p̅_K^C(n,p,q) } is convex. 
Prove or disprove that for any p,q ∈ [1,∞ ] with 1/p + 1/q = 1, C ∈_o and C-convex disk K ∈, the sequence {p̂_K^C(n,p,q) } is convex. Does the same hold for {â_K^C(n,p,q) } if C is the Euclidean unit disk? Before our last problem, we remark that â_K^C(n,1, ∞) = (K) - â_K^C(n) and â_K^C(n,∞,1) = Â_K^C(n)-(K). Is there a value p_0 ∈ (1,∞) such that for any p with p_0 < p ≤∞ and q satisfying 1/p + 1/q = 1, for any C ∈_o and C-convex disk K ∈, the sequence {â_K^C(n,p,q) } is convex? Bambah R.P. Bambah and C.A. Rogers, Covering the plane with convex sets, J. London Math. Soc. 27 (1952), 304-314. BCC2006 K. Bezdek, R. Connelly and B. Csikós, On the perimeter of the intersection of congruent disks, Beiträge Algebra Geom. 47 (2006), 53-62. BL23 K. Bezdek and Z. Lángi, From the separable Tammes problem to extremal distributions of great circles in the unit sphere, Discrete Comput. Geom., DOI: 0.1007/s00454-023-00509-w BLNP K. Bezdek, Z. Lángi, M. Naszódi and P. Papez, Ball-polyhedra, Discrete Comput. Geom. 38 (2007), 201-230. ChDT R. Chernov, K, Drach and K. Tatarko, A sausage body is a unique solution for a reverse isoperimetric problem, Adv. Math. 353 (2019), 431-445. Dowker C.H. Dowker, On minimum circumscribed polygons, Bull. Amer. Math. Soc. 50 (1944), 120-122. Eggleston H.G. Eggleston, Approximation to plane convex curves. (I) Dowker-type theorems, Proc. London Math. Soc. (3) 7 (1957), 351-377. GFT G. Fejes Tóth, On a Dowker-type theorem of Eggleston, Acta Math. Sci. Hungar. 29 (1977), 131-148. GFTandLFT G. Fejes Tóth and L. Fejes Tóth, Remark on a paper of C. H. Dowker, Periodica Math. Hungar. 3 (1973), 271-274. TF2015 G. Fejes Tóth and F. Fodor, Dowker-type theorems for hyperconvex discs, Period. Math. Hungar. 70 (2015), 131-144. LFTSzeged L. Fejes Tóth, Some packing and covering theorems, Acta Sci. Math. (Szeged) 12/A (1950), 62-67. LFTperim L. Fejes Tóth, Remarks on polygon theorems of Dowker, Mat. Lapok 6 (1955), 176-179 (Hungarian). regfig L. Fejes Tóth, Regular Figures, Macmillan, New York, 1964. HSTV H. Huang, B.A. Slomka, T. Tkocz and B. Vritsiou, Improved bounds for Hadwiger’s covering problem via thin-shell estimates, J. European Math. Soc. 24 (2022), 1431–1448. JMR T. Jahn, H. Martini, and C. Richter, Ball convex bodies in Minkowski spaces, Pacific J. Math. 289(2) (2017), 287–316. LNT2013 Z. Lángi, M. Naszod́i and I. Talata, Ball and spindle convexity with respect to a convex body, Aequationes Math. 85 (2013), 41-67. MM22 A. Marynych and I. Molchanov, Facial structure of strongly convex sets generated by random samples, Adv. Math. 395 (2022), 108086. Mayer A.E. Mayer, Eine Überkonvexität, Math. Z. 39 (1935), 511-531. MSW H. Martini, K. Swanepoel and G. Weiss, The geometry of Minkowski spaces - a survey. Part I, Expo. Math. 19 (2001), 97-142 . Molnar J. Molnár, On inscribed and circumscribed polygons of convex regions, Mat. Lapok 6 (1955), 210-218 (Hungarian). Prosanov R. Prosanov, On a relation between packing and covering densities of convex bodies, Discrete Comput. Geom. 65 (2021), 1028–1037. Thompson A.C. Thompson, Minkowski geometry, Encyclopedia of Mathematics and Its Applications 63, Cambridge University Press, New York, USA, 1996. Vincensini P. Vincensini, Sur les figures superconvexes planes, Bull. Soc. Math. France 64 (1936), 197-208.
http://arxiv.org/abs/2307.03954v1
20230708112025
Magnon influence on the superconducting DOS in FI/S bilayers
[ "A. S. Ianovskaia", "A. M. Bobkov", "I. V. Bobkova" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall" ]
National Research University Higher School of Economics, Moscow, 101000 Russia
Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia
Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia
Moscow Institute of Physics and Technology, Dolgoprudny, 141700 Russia
National Research University Higher School of Economics, Moscow, 101000 Russia

Superconductor/ferromagnetic insulator (FI/S) heterostructures are paradigmatic systems for studying the mutual influence of superconductivity and magnetism via proximity effects. In particular, spin-split superconductivity is realized in such structures. Recent experiments and theories demonstrate a rich variety of transport phenomena occurring in devices based on such heterostructures that suggest direct applications in thermoelectricity, low-dissipative spintronics, radiation detection and sensing. In this work we investigate the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface on the spin-split superconductivity. It is predicted that, due to the magnon-mediated electron spin-flip processes, the spin-split quasiparticle branches are partially mixed and reconstructed, and the BCS-like spin-split shape of the superconducting DOS, which is typical for superconductors in an effective exchange field, is strongly modified. An odd-frequency superconducting order parameter admixture to the leading singlet order parameter is also found. These findings expand the physical picture of spin-split superconductivity beyond the mean-field description of the ferromagnetic exchange field.

Magnon influence on the superconducting DOS in FI/S bilayers
I.V. Bobkova
August 12, 2023

§ INTRODUCTION

Long ago it was demonstrated that the exchange field of ferromagnetic insulators (FIs), such as EuS and EuO, can spin-split the excitation spectrum of an adjacent thin-film superconductor <cit.>. The spin splitting in the DOS observed in those experiments resembles the spin splitting created by a strong in-plane field applied to a thin superconducting film. This discovery opened up the way for performing spin-polarized tunneling measurements without the need to apply large magnetic fields. A renewed interest in studying ferromagnetic/superconductor (F/S) structures came with the active development of superconducting spintronics <cit.>, caloritronics and spin caloritronics <cit.>. In particular, in F/S structures with spin-split density of states (DOS) a series of promising phenomena have been studied. Among them are giant thermoelectric <cit.> and thermospin effects <cit.>, highly efficient thermally-induced domain wall motion <cit.>, spin and heat valves <cit.>, cooling at the nanoscale <cit.>, low-temperature thermometry and development of sensitive electron thermometers <cit.>. The spin-split DOS in F/S structures has also been explored in the presence of magnetic inhomogeneities, such as textured ferromagnets and domain walls <cit.>. Characteristic signatures of equal-spin triplet pairing were reported <cit.>. It was shown that the characteristic spatial and energy dependence of the spin-dependent DOS allows one to tomographically extract the structure of the spin-triplet Cooper pairs <cit.>. Furthermore, the influence of the domain structure on the position-averaged superconducting DOS in FI/S bilayers was studied <cit.>.
Another important direction in the field of F/S hybrid structures is the investigation of the interplay between the superconducting state and ferromagnetic excitations, magnons. A series of interesting results, presumably related to the influence of the superconductor on the magnon spectrum, have been reported. In particular, it was found that the adjacent superconductor works as a spin sink strongly influencing the Gilbert damping of the magnon modes <cit.> and can result in a shift of the k = 0 magnon frequency (Kittel mode) <cit.>. The electromagnetic interaction between magnons in ferromagnets and superconductors also results in the appearance of magnon-fluxon excitations <cit.> and efficient gating of magnons <cit.>. Furthermore, it was reported that the magnetic proximity effect in thin-film F/S hybrids results in the formation of magnon-Cooparons, which are composed of a magnon in F and an accompanying cloud of spinful triplet pairs in S <cit.>. Some aspects of the back influence of magnons on the superconducting state have already been investigated. For example, a possible realization of magnon-mediated superconductivity in F/S hybrids has been proposed <cit.>. At the same time, the influence of magnons on the superconducting DOS via the magnetic proximity effect has hardly been studied yet, although the electron-magnon interaction and its influence on the DOS in ferromagnetic metals were investigated long ago <cit.>.

Here we consider how the effects of electron-magnon interactions in FI/S thin-film hybrids manifest themselves in the superconducting DOS and quasiparticle spectra of the superconductor. It is found that the magnon-mediated electron spin-flip processes cause interaction and mixing of the spin-split bands, resulting in their reconstruction, which is especially important near the edge of the superconducting gap. We demonstrate that the classical BCS-like Zeeman-split shape of the superconducting DOS can be strongly modified due to the electron-magnon interaction and that this modification is temperature-dependent. The influence of magnons on the temperature dependence of the Zeeman splitting of the DOS and the relevance of our findings to existing and future experiments are also discussed.

The paper is organized as follows. In Sec. <ref> we describe the system under consideration and the Green's function formalism taking into account magnon self-energies. In Sec. <ref> the modifications of the quasiparticle spectra in the superconductor due to the electron-magnon coupling are discussed. In Sec. <ref> we study signatures of the electron-magnon interaction in the Zeeman-split superconducting DOS and their temperature dependence. Our conclusions are summarized in Sec. <ref>.

§ SYSTEM AND FORMALISM

We consider a thin-film bilayer as depicted in Fig. <ref>, in which a ferromagnetic insulator FI is interfaced with a conventional spin-singlet s-wave superconductor S. The thickness of the S layer d_S is assumed to be small compared to the superconducting coherence length ξ_S. In this case the S layer can be considered homogeneous along the normal to the interface plane. The FI layer in its ground state is magnetized in-plane, along the z-direction. The Hamiltonian of the system takes the form: Ĥ=Ĥ_S+Ĥ_FI+Ĥ_ex, where Ĥ_S is the standard mean-field BCS Hamiltonian describing electrons in the superconducting film: Ĥ_S = ∑_ k σξ_ k c_ k σ^† c_ k σ - ∑_ kΔ c_ k↑^† c_- k↓^† - ∑_ kΔ^* c_- k↓ c_ k↑ .
ξ_ k = k^2/2m - μ is the normal state kinetic energy of the electrons in the S layer, counted from the chemical potential of the superconductor μ. Δ is the superconducting order parameter in S, which assumed to be of conventional isotropic s-wave type. c_ k σ^+ and c_ k σ are creation and annihilation operators of electrons with the wave vector k and spin σ. Ĥ_FI describes magnons in the FI. Assuming easy-axis magnetic anisotropy in the FI it can be written as Ĥ_FI = ∑_ q (ω_0 + D q^2) b_ q^† b_ q, where b_ q^+ and b_ q are creation and annihilation operators of magnons in FI with wave vector q, ω_0 = |γ| (μ_0 H_0 + 2 K_a/M_s) is the magnonic frequency at q=0, D is the magnon stiffness constant, γ is the typically negative gyromagnetic ratio, M_s is the saturation magnetization, μ_0 is the permeability of free space, K_a is the easy-axis anisotropy constant and H_0 is the external field (can be equal to zero in our consideration). Electronic and magnonic wave vectors k and q are assumed to be two-dimensional (2D), that is the electrons and magnons can only propagate in plane of the FI/S interface. The wave functions along the y-direction, perpendicular to the interface, are assumed to be quantized. For simplicity, in the formulas we leave only one transverse magnon mode. In fact, we have checked that different modes give quantitatively different, but qualitatively the same contributions to considered self-energies. Their effect can be accounted for by multiplying our results for the self-energy corrections by an effective number of working transverse modes (see below). Ĥ_ex accounts for the exchange interaction between S and FI: Ĥ_ex = -J∫ d^2 ρ S_FI(ρ) s_e(ρ) , where ρ is a two-dimensional radius-vector at the interface plane, S_FI and s_e are the spin density operators in the FI and S, respectively. J is the interface exchange constant. By performing the Holstein-Primakoff transformation to the second order in the magnonic operators in Eq. (<ref>) one obtains Ĥ_ex = Ĥ_1 + Ĥ_2 + Ĥ_3, with Ĥ_1 = ∑_ k, k' U_ k, k'(c_ k, ↑^† c_ k', ↑-c_ k,↓^† c_ k',↓) , U_ k, k' = JM_s/2|γ|∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ), Ĥ_2 = ∑_ k, k', q, q' T_ k, k', q, q' b_ q^† b_ q' (c_ k, ↑^† c_ k', ↑-c_ k,↓^† c_ k',↓), T_ k, k', q, q' = - J/2∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ) ϕ_ q^*(ρ) ϕ_ q'(ρ), Ĥ_3 = ∑_ k, k', q V_ k, k', q (b_ q c_ k, ↑^† c_ k', ↓ + b_ q^† c_ k', ↓^† c_ k, ↑), V_ k, k', q = J √(M_s/2|γ|)∫ d^2 ρΨ_ k^*(ρ) Ψ_ k'(ρ) ϕ_ q(ρ) , where Ĥ_1 describes a spin-splitting of the electronic energy spectrum in S in the mean-field approximation. The second term Ĥ_2 represents the Ising-term, which physically accounts for the renormalization of the spin-splitting by magnonic contribution. Since the processes of the spin transfer between electrons and magnons are of primary importance for our consideration, when calculating the electronic Green's function we simplify this term by substituting the magnon operator b_ q^† b_ q by its averaged value ⟨ b_ q^† b_ q⟩ = n_ qδ_ q q', where n_ q is the density of magnons with wave vector q. The third term Ĥ_3 transfers spin between electron and magnon operators and will turn out to be the most significant for effects under consideration. 
If we choose the wave functions of electrons Ψ_ k(ρ) and magnons ϕ_ q(ρ) at the interface in the form of plane waves propagating along the interface, that is Ψ_ k(ρ)=(1/√(d_S))e^i k ρ and ϕ_ q(ρ)=(1/√(d_FI))e^i q ρ, then Ĥ_ex can be simplified: Ĥ_ex = Ũ∑_k (c_k, ↑^† c_k, ↑-c_k,↓^† c_k,↓) + V ∑_k, q (b_q c_k, ↑^† c_k-q, ↓ + b_q^† c_k-q, ↓^† c_k, ↑) , where Ũ = -J (M_s-N_m |γ|)/(2|γ|d_S ) is the averaged spin-splitting field in the superconductor renormalized by the magnon density N_m, and V = J√(M_s/2|γ|d_FI A)(1/d_S) is the electron-magnon coupling constant, where A is the area of the FI/S interface. Introducing the following Nambu-spinor Ψ̌_ k = (c_ k ↑, c_ k ↓, -c_- k ↓^†, c_- k ↑^†)^T, we define the Gor'kov Green's function in the Matsubara representation Ǧ_ k(τ) = -⟨ T_τΨ̌_ kΨ̌_ k^†⟩, where ⟨ T_τ ... ⟩ means imaginary time-ordered thermal averaging. Turning to the Matsubara frequency representation the Green's function obeys the following equation: (iω - ξ_k τ_z - Ũσ_z - Δτ_x - Σ̌_m )Ǧ_ k (ω) = 1, where ω is the fermionic Matsubara frequency, σ_i and τ_i (i=x,y,z) are Pauli matrices in spin and particle-hole spaces, respectively. Σ̌_m is the magnonic self-energy, which describes corrections to the electronic Green's function due to the electron-magnon interaction and in the framework of the self-consistent Born approximation takes the form: Σ̌_m = - V^2 T ∑_ q,Ω B_ q(Ω) {σ_+ Ǧ_ k- q (ω - Ω)σ_- + . . σ_- Ǧ_ k+ q (ω + Ω)σ_+} , where σ_± = (σ_x ± i σ_y), Ω is the bosonic Matsubara frequency and B_ q(Ω) = [iΩ - (ω_0+Dq^2)]^-1 is the magnonic Green's function. From the spin structure of Eq. (<ref>) it is seen that Σ̌_m is diagonal in spin space. For this reason the electronic Green's function, which is given by the solution of Eq. (<ref>) is also diagonal matrix in spin space and Eq. (<ref>) can be written for the both spin subbands separately: (iω - ξ_k τ_z - σŨ - Δτ_x - Σ̂_m, σ )Ĝ_ k, σ (ω) = 1, where Ĝ_ k, σ is 2 × 2 matrix in the particle-hole space corresponding to the electron spin σ = ↑, ↓. Σ̂_m,σ is also 2 × 2 matrix in the particle-hole space representing the magnonic self-energy for the given spin subband σ: Σ̂_m,σ = - V^2 T ∑_ q,Ω B_ q(Ω) Ĝ_ k-σ q, σ̅ (ω - σΩ). As a factor in the expressions σ means ± 1 for the spin-up (spin-down) subbands, and σ̅ means the opposite spin subband. The dimensionless coupling constant quantifying the strength of the electron-magnon coupling is K=V^2 A / 4 πħ v_F √(D Δ). Our numerical estimates made for the parameters corresponding to EuS/Al or YIG/Nb structures suggest that K should be rather small, K ≪ 1, for the detailed discussion of the numerical estimates see Sec. <ref>. The smallness of the electron-magnon coupling constant allows us to use non self-consistent Born approximation when calculating magnon self-energy. That is, we substitute Ĝ_ k - σ q, σ̅ by the bare superconducting Green's function obtained without taking into account the magnon self-energy Ĝ_ k - σ q, σ̅^(0) in Eq. (<ref>). Then the explicit solution of Eq. (<ref>) takes the form: Ĝ_ k,σ (ω) = i ω_ k, σ +ξ_ k, στ_z + Δ_ k, στ_x/(i ω_ k, σ)^2 - (ξ_ k, σ)^2 - (Δ_ k, σ)^2 . 
where all the quantities entering this expression are renormalized by the electron-magnon interaction as follows:

Δ_ k, σ (ω) = Δ + δΔ_ k,σ(ω) = Δ - V^2 T ∑_ q, Ω B_ q(Ω) Δ/[(i ω - iσΩ + Ũσ)^2 - ξ^2_ k-σ q - |Δ|^2] ,

ξ_ k, σ (ω) = ξ_ k + δξ_ k,σ(ω) = ξ_ k - V^2 T ∑_ q, Ω B_ q(Ω) ξ_ k-σ q/[(i ω - iσΩ + Ũσ)^2 - ξ^2_ k-σ q - |Δ|^2] ,

ε_ k, σ (ω) = i ω - Ũσ + δε_ k,σ(ω) = i ω - Ũσ + V^2 T ∑_ q, Ω B_ q(Ω) (i ω - iσΩ + Ũσ)/[(i ω - iσΩ + Ũσ)^2 - ξ^2_ k-σ q - |Δ|^2] .

For the problem under consideration all the in-plane directions of k are equivalent. For this reason the magnonic corrections only depend on the absolute value k of the wave vector. Further, in order to study the quasiparticle spectra and density of states, we turn from Matsubara frequencies to real energies in the Green's functions, i ω→ε + i δ, where δ is an infinitesimal positive number. The magnonic corrections for spin-up electrons δΔ_ k, ↑, δξ_ k, ↑ and δε_ k, ↑ are presented in Figs. <ref>-<ref> as functions of the quasiparticle energy ε and ξ_ k≡ξ, which after linearization in the vicinity of the Fermi surface takes the form ξ_ k ≈v_F ( k - k_F). The key features of the corrections, which can be seen in the presented plots, are the following:

(i) The dependence of the corrections on ξ is very weak. The reason is that the most important range of the magnonic wave numbers contributing to the corrections is q ≲ 1/ξ_S, where ξ_S = v_F/Δ is the superconducting coherence length. Then, taking parameters of the magnon spectrum corresponding to YIG (ω_0,YIG∼ 10^-1Δ, D_YIG≈ 5*10^-40 J*m^2) or EuS (ω_0,EuS∼ 10^-2Δ, D_EuS≈ 3*10^-42 J*m^2), we obtain that D q^2 ≪ω_0 to very good accuracy for all reasonable parameters. Consequently, one can disregard D q^2 with respect to ω_0 in the magnonic Green's function B_ q, and after linearization of ξ_ k - σ q≈v_F ( k - σ q - k_F) in the vicinity of the Fermi surface we see that the dependence on k drops out of Eqs. (<ref>)-(<ref>).

(ii) The correction to the normal state electron dispersion δξ is small with respect to all other corrections and is neglected below.

(iii) The important corrections δΔ and δε have peaks at the energies corresponding to the superconducting coherence peaks of the opposite spin subbands. While the coherence peaks for the spin-up subband are located at ε = ±Δ +Ũ, the peaks of the corrections are at ε = ±Δ -Ũ. This is an obvious consequence of the electron spin-flip process accompanied by the emission or absorption of a magnon.

(iv) The correction δΔ represents an effective contribution to the superconducting order parameter induced from the pure singlet pairing Δ via the electron-magnon interaction. It depends on the Matsubara frequency and contains both singlet and triplet components. As can be seen from Eq. (<ref>), the correction obeys the condition δΔ_↑(ω) = δΔ_↓(-ω). This means that the triplet component δΔ_t (ω) = δΔ_↑(ω) - δΔ_↓(ω) = -δΔ_t(-ω) works as an effective odd-frequency superconducting order parameter. This situation is rather unusual because typically in F/S hybrid systems we encounter an odd-frequency anomalous Green's function, but at the same time the order parameter is still even-frequency in the framework of the conventional BCS weak-coupling theory.

§ QUASIPARTICLE SPECTRA

Now we turn to the discussion of how the quasiparticle spectra in the S layer are modified by the electron-magnon interaction. In Fig. <ref>(a) we present the spectral functions for both spins in the S layer calculated from the Green's function (<ref>) according to the relation A_σ(ε, k) = -(1/π) Tr{ [(1+τ_z)/2] Im[Ĝ_ k,σ^R(ε)]}.
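As an illustrative numerical aside (not taken from the original text), the bare V = 0 building block entering the expressions above, i.e. the Zeeman-split BCS retarded Green's function, can be evaluated directly. The Python sketch below computes the corresponding spectral function and spin-resolved DOS; the exchange field Ũ = 0.5Δ and the small artificial smearing replacing the infinitesimal in iω → ε + iδ are assumed illustration values, not parameters of the figures.

import numpy as np

Delta = 1.0    # superconducting gap (energy unit)
U = 0.5        # effective exchange field U (tilde U) in units of Delta, assumed value
smear = 1e-3   # small positive smearing standing in for the infinitesimal delta

def spectral_function(eps, xi, sigma):
    # electron-like (1 + tau_z)/2 part of the bare retarded BCS Green's function
    # with Zeeman shift sigma*U; sigma = +1 (spin up) or -1 (spin down)
    e = eps - sigma * U + 1j * smear
    return -np.imag((e + xi) / (e**2 - xi**2 - Delta**2)) / np.pi

def dos(eps, sigma):
    # normalized BCS density of states of the spin-sigma band, N_sigma(eps)/N_0
    e = eps - sigma * U + 1j * smear
    return np.abs(np.real(e / np.sqrt(e**2 - Delta**2)))

eps = np.linspace(-3.0, 3.0, 1201)
N_up, N_dn = dos(eps, +1), dos(eps, -1)
# coherence peaks of the spin-up band sit at eps = +/-Delta + U,
# those of the spin-down band at eps = +/-Delta - U
print("A_up(eps=1.6, xi=0) =", spectral_function(1.6, 0.0, +1))
print("N(eps=Delta+U)/N_0  =", (N_up + N_dn)[np.argmin(np.abs(eps - (Delta + U)))])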
The spectral function is isotropic in momentum space and for this reason we plot it as a function of ξ_ k≡ξ. The electron-like and hole-like quasiparticle branches are clearly seen at positive and negative energies, respectively. Black dashed lines represent the quasiparticle spectra in the absence of the electron-magnon interaction. The electron-magnon interaction leads to the following main modifications of the quasiparticle spectra:

(i) The Zeeman splitting of the spin-up and spin-down quasiparticle branches is reduced due to the magnon-mediated interaction between quasiparticles with opposite spins.

(ii) For positive energy branches, corresponding to electron-like quasiparticles, the lifetime of spin-up quasiparticles and of quasiparticles at the upper part of the spin-down branch is considerably suppressed, which is seen as a broadening of the corresponding branches. For negative energies, corresponding to hole-like quasiparticles, the situation is symmetric if we interchange spins. The broadening of the spin-down branch only occurs in the energy region where the spin-up branch also exists. The physical reason is that the spin-flip processes providing the broadening are nearly horizontal due to the fact that ω_0 + Dq^2 ≪Δ, that is, the magnon energies are small as compared to Δ in the whole range of ξ considered in Fig. <ref>. The lower (upper) part of the spin-down (up) positive (negative) energy branch is not broadened because there are no available states for the opposite-spin quasiparticles at the appropriate energies and, consequently, the spin-flip processes are not allowed.

(iii) In Fig. <ref>(a) we also see a reconstruction of the spin-down spectral branch in the energy range of the bottom of the spin-up branch. In order to investigate this effect in more detail we plot the same figure on a logarithmic scale in Fig. <ref>(b), which allows weak spectral features to be seen clearly. Figs. <ref>(c) and (d) represent the spectral functions for the spin-up band on the normal and on the logarithmic scale, respectively. From Figs. <ref>(b) and (d) it is seen that, due to the electron-magnon interaction, a nonzero density of states appears for the opposite-spin branch in the energy region of the extremum of the spin-up (down) branch. It looks like a horizontal line starting from the bottom of the corresponding branch. This line is horizontal because the electron-magnon self-energy corrections (<ref>) and (<ref>) are independent of ξ. This mixing of the spin-up and spin-down bands resulting from the magnon-mediated spin-flip processes is natural and exists at all energies, but the spectral weight of the opposite-spin branch is too small except for the regions of the extrema of the bands corresponding to the coherence peaks of the superconducting DOS. The intersection of the additional lines with the original spin-down band results in its reconstruction, which looks like an avoided crossing point.

The results for the spectral function presented and discussed above correspond to T=0.1Δ. This temperature is higher than the gap in the magnonic spectrum ω_0=0.03Δ, which we take in our calculations. Therefore, a large number of thermal magnons are excited at this temperature. In Fig. <ref> the spectral function is demonstrated for the lower temperature T=0.01Δ<ω_0.
It is seen that the characteristic signatures of the magnon-mediated spin-flip processes, that is the mixing, reconstruction and broadening of the branches are much less pronounced due to the suppression of the thermally excited magnons at such low temperatures. § DOS IN THE PRESENCE OF MAGNONS Now we turn to discussion of the local density of states (LDOS) in the S layer, which is calculated as the momentum integrated spectral function: N(ε) = ∫d^2k/(2π)^2 A(ε, k). Fig. <ref>(a) demonstrates the LDOS in the presence of electron-magnon interaction (solid line) as compared to the LDOS calculated at V=0 (dashed line). The LDOS at V=0, that is calculated assuming mean-field approximation for the exchange field, takes the conventional BCS-like shape. It manifests Zeeman-split coherence peaks, and the outer peak is always higher than the inner one. The electron-magnon interaction inverts the relative ratio of the peak heights and broadens the outer peaks, while the width of the inner peaks remains unchanged. The reason is the same as for the broadening of the spectra in Fig. <ref>: electron spin-flip processes accompanied by a magnon emission or absorption. The outer coherence peaks in Fig.<ref>(a) correspond to the energy regions of the bottom (top) of the positive(negative)-energy spin-up(down) bands. This type of broadening, which only affects outer peaks, differs from the other physical mechanisms resulting in the broadening of the coherence peaks, such as the orbital effect of the magnetic field, inelastic scattering or magnetic impurities, which affect all the peaks <cit.> and can be roughly described by the Dynes parameter. The other important manifestation of the electron-magnon interaction is that the shape of the LDOS strongly depends on temperature even at very low temperatures ∼ω_0 ≪Δ, in agreement with the discussed above behavior of the spectral function. The temperature evolution of the LDOS is presented in Fig. <ref>. It is seen that the broadening of the outer peak develops with increasing temperature in the temperature range ∼ω_0. It is clear if we remember that the broadening is caused by the spin-flip processes, which are mediated by the thermally excited magnons. We do not consider larger temperatures T ≫ω_0 comparable to the critical temperature of the superconducting film because in this temperature range the temperature dependence of the superconducting gap comes into play and the correct consideration of the problem requires solving of the self-consistency equation for the order parameter. Now let us discuss numerical estimates of the dimensionless constant K=V^2 A / 4 πħ v_F √(D Δ), which controls the strength of the electron-magnon coupling. Substituting V = J√(M_s/2|γ|d_FI A)(1/d_S) and expressing the interface exchange coupling constant via the experimentally accessible quantity Ũ as |J| = 2 |γ| Ũ d_S/M_s (where to the leading approximation we neglect magnonic contribution to the magnetization), we obtain K = Ũ^2 (2|γ|/M_s) 1/(4 π√(DΔ)v_F d_FI) for one transverse magnon mode. The effective number of working transverse modes N_⊥∼ d_FI/a, where a is the interatomic distance in the ferromagnet. According to our estimates for d_FI≈ 10 nm N_⊥∼ 2 ÷ 5. 
One can take the following parameters for YIG/Nb heterostructures: Ũ/Δ = 0.5, v_F = 10^6 m/s, Δ_Nb = 2.7*10^-22 J, a = 1.2 nm, 2|γ|/M_s = 3.3*10^-27 m^3, and D = D_YIG,bare - δD_YIG, where D_YIG,bare = 5*10^-40 J*m^2 <cit.> is the exchange stiffness of YIG and δD_YIG is the renormalization of the stiffness in FI/S bilayers due to the formation of magnon-Cooparon quasiparticles <cit.>. As predicted in <cit.>, for the material parameters of YIG/Nb heterostructures δD_YIG can be ∼ (0.5 ÷ 1) D_YIG,bare for d_FI ∼ (1 ÷ 0.5) d_S. Therefore, the electron-magnon coupling constant for YIG/Nb heterostructures can vary in a wide range, K_YIG/Nb ≳ 10^-4. The values K ∼ 0.01 considered here can be realized in the regime of strong renormalization of the exchange stiffness constant D. For EuS/Al heterostructures one can take Ũ/Δ = 0.25 <cit.>, v_F = 10^6 m/s, Δ_Al = 3.5*10^-23 J, a = 10^-10 m, 2|γ|/M_s = 3.3*10^-28 m^3, and D = D_EuS,bare, where D_EuS,bare = 3*10^-42 J*m^2 <cit.>. The superconducting renormalization of the stiffness due to the formation of magnon-Cooparon quasiparticles is predicted to be small for the parameters corresponding to EuS/Al heterostructures at reasonable thicknesses d_FI, owing to the smaller value of Δ and the larger M_s. Substituting these parameters into the expression for K, we conclude that for EuS/Al heterostructures K_EuS/Al ∼ 10^-7 ÷ 10^-6, so that the electron-magnon effects are unlikely to be observable in such structures. In general, the electron-magnon effects in the LDOS and quasiparticle spectra should be more pronounced in ultra-thin superconducting films with high critical temperatures, where large absolute values of the effective exchange field Ũ can be realized. Smaller values of the exchange stiffness of the ferromagnet also enhance the effect. The manifestations of the electron-magnon coupling become more pronounced at T ≳ ω_0 and grow with temperature. Now we discuss the influence of the electron-magnon interaction on the effective Zeeman splitting, defined as half the distance between the split coherence peaks of the LDOS. Experimentally, a low-temperature reduction of the effective Zeeman splitting at T ≪ Δ has been reported for EuS/Al heterostructures <cit.>. It was ascribed to the presence of weakly bound spins at the EuS/Al interface. The renormalization of the effective exchange field in the superconductor by thermal magnons can also contribute to this effect. Indeed, fitting the experimentally observed temperature dependence of the distance between the Zeeman-split coherence peaks ΔV_peak(T) by 2|Ũ| = J (M_s - N_m|γ|)/(2|γ|d_S), with the magnon density N_m = (1/(S d_FI)) ∑_q {exp[(ω_0+Dq^2)/T]-1}^-1 and ω_0 ≈ 0.03 K, gives reasonable agreement with the experimental data. In addition, the broadening of the outer coherence peaks predicted in this work leads to an enhancement of the distance between the spin-split coherence peaks. The broadening becomes stronger with increasing temperature. This effect leads to an apparent growth of the peak splitting with temperature and therefore acts opposite to the renormalization of the effective Zeeman field by magnons. However, our numerical estimates suggest that this temperature growth is unlikely to be observed, at least for heterostructures consisting of the materials discussed above, because the renormalization of the effective Zeeman field by magnons dominates.
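Before concluding, we note for reference that the order-of-magnitude estimates of K given above can be reproduced with the short sketch below; the factor ħ from the original definition K = V^2 A/(4πħ v_F √(DΔ)) is kept explicitly, and the function name and default values are our own choices rather than part of the published analysis.

```python
import numpy as np

hbar = 1.0546e-34   # J*s

def coupling_K(U_over_Delta, Delta, two_gamma_over_Ms, D, v_F=1.0e6, d_FI=10e-9):
    """K = U~^2 (2|gamma|/M_s) / (4*pi*hbar*v_F*sqrt(D*Delta)*d_FI), per transverse magnon mode."""
    U = U_over_Delta * Delta
    return U**2 * two_gamma_over_Ms / (4.0 * np.pi * hbar * v_F * np.sqrt(D * Delta) * d_FI)

# YIG/Nb, using the bare stiffness (i.e. before the magnon-Cooparon renormalization of D)
K_yig = coupling_K(0.5, 2.7e-22, 3.3e-27, 5e-40)
# EuS/Al
K_eus = coupling_K(0.25, 3.5e-23, 3.3e-28, 3e-42)

print(f"K per mode, YIG/Nb: {K_yig:.1e}")  # ~1e-5 per mode with the bare stiffness
print(f"K per mode, EuS/Al: {K_eus:.1e}")  # ~2e-7, within the quoted 10^-7 - 10^-6 range
```

Running it gives K of order 10^-5 per mode for YIG/Nb and of order 10^-7 for EuS/Al; accounting for the N_⊥ ∼ 2 ÷ 5 transverse modes and the reduction of D discussed above brings the YIG/Nb value toward the quoted K ≳ 10^-4 range.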
§ CONCLUSIONS In this work we have studied the influence of the electron-magnon interaction at the superconductor/ferromagnetic insulator interface of thin-film FI/S heterostructures on the quasiparticle spectrum and the LDOS in the superconducting layer. It is predicted that, due to the magnon-mediated electron spin-flip processes, the spin-split quasiparticle branches are partially mixed and reconstructed. The reconstruction is most pronounced near the bottom of the energetically unfavorable spin band, because of the enhanced density of electronic states there and the existence of available states in the opposite-spin band. The BCS-like Zeeman-split shape of the superconducting DOS, which is typical of superconductors in an effective exchange field, is strongly modified by the electron-magnon interaction. The outer spin-split coherence peaks are broadened, while the inner peaks remain intact. This type of broadening is a clear signature of magnon-mediated spin flips and differs strongly from other mechanisms of coherence-peak broadening, which usually affect all peaks. The broadening grows with temperature due to the thermal excitation of magnons. The features in the electronic DOS described above are mainly caused by the magnonic contributions to the electron self-energy that are diagonal in particle-hole space, that is, by quasiparticle processes. In addition, we have found a magnonic contribution to the electron self-energy that is off-diagonal in particle-hole space. It mimics an odd-frequency superconducting order-parameter admixture to the leading singlet order parameter. The study of its influence on the superconducting properties of the system may be an interesting direction for future research. § ACKNOWLEDGMENTS We acknowledge discussions of the exchange interaction Hamiltonian with Akashdeep Kamra. The work was supported by the Russian Science Foundation via the RSF project No. 22-42-04408.
http://arxiv.org/abs/2307.05294v3
20230711143832
Variability of the slow solar wind: New insights from modelling and PSP-WISPR observations
[ "Nicolas Poirier", "Victor Réville", "Alexis P. Rouillard", "Athanasios Kouloumvakos", "Emeline Valette" ]
astro-ph.SR
[ "astro-ph.SR", "physics.space-ph" ]
Variability of the slow solar wind: New insights from modelling and PSP-WISPR observations
Nicolas Poirier^1,2, Victor Réville^3, Alexis P. Rouillard^3, Athanasios Kouloumvakos^4, Emeline Valette^3
August 12, 2023
^1 Rosseland Centre for Solar Physics, University of Oslo, Postboks 1029 Blindern, N-0315 Oslo, Norway
^2 Institute of Theoretical Astrophysics, University of Oslo, Postboks 1029 Blindern, N-0315 Oslo, Norway
^3 IRAP, Université Toulouse III - Paul Sabatier, CNRS, CNES, Toulouse, France
^4 The Johns Hopkins University Applied Physics Laboratory, 11101 Johns Hopkins Road, Laurel, MD 20723, USA
We analyse the signature and origin of transient structures embedded in the slow solar wind, and observed by the Wide-Field Imager for Parker Solar Probe (WISPR) during its first ten passages close to the Sun. WISPR provides a new in-depth vision of these structures, which have long been speculated to be a remnant of the pinch-off magnetic reconnection occurring at the tip of helmet streamers. We pursued the previous modelling works of <cit.> that simulate the dynamic release of quasi-periodic density structures into the slow wind through a tearing-induced magnetic reconnection at the tip of helmet streamers. Synthetic WISPR white-light (WL) images are produced using a newly developed advanced forward modelling algorithm that includes an adaptive grid refinement to resolve the smallest transient structures in the simulations. We analysed the aspect and properties of the simulated WL signatures in several case studies that are typical of solar minimum and near-maximum configurations. Quasi-periodic density structures associated with small-scale magnetic flux ropes are formed by tearing-induced magnetic reconnection at the heliospheric current sheet and within 3-7 R_⊙. Their appearance in WL images is greatly affected by the shape of the streamer belt and the presence of pseudo-streamers. The simulations show periodicities on ≃90-180 min, ≃7-10 hr, and ≃25-50 hr timescales, which are compatible with WISPR and past observations. This work shows strong evidence for a tearing-induced magnetic reconnection contributing to the long-observed high variability of the slow solar wind.
§ INTRODUCTION In contrast to the fast solar wind, a mystery remains on the origin of the slow solar wind (SSW) and of its high variability. This variability can be the result of time-dependent and/or spatially dependent effects. The spatially dependent variability often emerges in structured bundles of bright rays in coronal white-light (WL) emissions, which have long been observed from coronagraphs and heliospheric imagers such as the Solar and Heliospheric Observatory <cit.> and the Solar TErrestrial RElations Observatory <cit.>. Recently, the Wide-Field Imager for Parker Solar Probe <cit.> revealed a finer structuring of the slow solar wind at scales down to the thin heliospheric plasma sheet (HPS) <cit.>. On the other hand, it has been shown that the SSW is also highly time dependent, by hosting up to ≈ 80% of quasi-periodic structures <cit.>. This paper focuses on the time-dependent variability of the slow wind, as captured from the novel WISPR perspective and in light of state-of-the-art modelling.
Density transient structures that propagate along with the SSW have long been observed in white-light imagery, with a great variety of shapes, speed, and origins. Among the most evident are coronal mass ejections (CMEs), which by releasing tremendous amount of coronal material into the heliosphere, generate significant brightness enhancements in both coronagraph and heliospheric images <cit.>. Some CME events that undergo more moderate and progressive accelerations, called streamer blowouts, have been particularly observed to deflect towards the cusp of helmet streamers and further propagate within the SSW <cit.>. Since the beginnings of the SoHO-LASCO coronagraph, other CME-like flux rope structures known as the Sheeley blobs <cit.> have been observed to propagate along the bright rays associated with streamer stalks where the densest slow wind originates. Early interpretations suggested that these structures formed as a result of a pinch-off reconnection at the tip of helmet streamers that would have been stretched out beforehand <cit.>; the conditions leading to this stretching and eventually to the reconnection are still unclear. On some occasions they also appear as bright arches, which may be more or less `squashed' depending on their inclination with respect to the observer <cit.>. Large loops from active regions (ARs) have also been observed to leave arch-like signatures as they gradually expand into the corona <cit.>. A helmet streamer made of such expanding loops may then be prone to stretching, and to the formation of streamer blobs via the pinch-off reconnection scenario. This picture has the advantage of also being consistent with observations of plasma inflows in LASCO <cit.>, which were associated for the first time with outflowing blobs later on <cit.>. The continuous tracking of blobs expelled from the tip of helmet streamers all the way to points of in situ measurements reveals that they transport helical magnetic fields <cit.>, which is further supported by recent Parker Solar Probe (PSP) observations <cit.>. More systematic statistical analyses based on STEREO images, and of in situ measurements inside the HPS revealed that the topology of blobs is consistent with magnetic flux ropes <cit.> that could form via magnetic reconnection at the tip of helmet streamers <cit.>. The modelling of streamer instabilities in time-dependent magnetohydrodynamics (MHD) simulations also supports this scenario <cit.>. Following in the footsteps of these models, <cit.> investigated in detail the tearing instability that occurs near the cusp of streamers, in a high-resolution 2.5D simulation of the corona and using an idealistic dipolar configuration of the solar magnetic field. Streamer blobs were reproduced in addition to a plethora of quasi-periodic structures over a wide range of frequencies. Past studies based on near 1 AU remote-sensing and/or in situ observations also reveal the existence of quasi-periodic structures with periodicities varying from ≈ 90-180 min to ≈ 8-16 hr <cit.>, which were found in Helios <cit.> and in recent PSP observations as well <cit.>. From a high-cadence campaign on the STEREO-A COR-2 coronagraph, <cit.> recently revealed the ubiquitous presence of density fluctuations at even smaller scales ≈ 20-40-60 min. In addition to the density and magnetic field, other plasma properties have also been measured to vary during the passage of these transients. 
For instance, <cit.> showed a similarity between the variability of the charge state ratios measured in situ in the slow wind and the short hourly timescale of the quasi-periodic structures observed remotely in WL streamers. This SSW originating from streamers tends to exhibit high charge state ratios typical of hot ARs, whereas the SSW that emerges farther away from streamers, probably from deeper inside coronal holes is characterised by lower charge-state ratios comparable to those measured in the fast wind <cit.>. The streamer-like SSW is also known to be more enriched in heavy ions having a low first ionisation potential (FIP) <cit.>, a composition typical of closed-field plasma from ARs <cit.>. The streamer-like SSW, or at least its dynamic component, could hence be conveniently interpreted as originating from the pinch-off reconnection mechanism by offering a channel through which closed-field material can be intermittently released into the slow wind. This paper further investigates this scenario through a qualitative comparison between the recent highly resolved WL observations taken by WISPR, and high-resolution simulations of the solar corona and solar wind. We first analyse in section <ref> two events observed by WISPR that depict quasi-periodic structures. We then present in Sect. <ref> our modelling approach to reproduce such structures through the tearing-induced reconnection at the tip of streamers. Synthetic WISPR images are then produced and compared against observations in Sect. <ref>. Limitations and future perspectives on this work are discussed in Sect. <ref>. We finally conclude on the possible implications of this work for the understanding of the slow solar wind in Sect. <ref>. § OBSERVATIONS After its first 11 successful encounters WISPR has already provided a wealth of images rich in structures that were often unresolved from typical 1 AU observatories <cit.>. This is a direct benefit of bringing an imager so close to the Sun, to a vantage point that is located inside the corona. By drastically shortening the line-of-sight integration path, WISPR is able to resolve with unprecedented detail the density structures that propagate within the slow solar wind. WISPR consists of two WL heliospheric imagers that are mounted on the ram side of PSP, and so the solar wind structures can be imaged prior to their in situ measurement <cit.>. WISPR offers a large field of view (FOV) thanks to its two telescopes that cover in elongation angle (ϵ, angle away from the Sun) 13.5-53.0^∘ for the inner telescope (WISPR-I) and 50.5-108.5^∘ for the outer telescope (WISPR-O). At the closest approach to be reached by PSP in 2024 (9.86 R_⊙), WISPR-I will be able to observe the corona from only 2.3 R_⊙. We exploit WISPR level 3 images,[data source: <https://wispr.nrl.navy.mil/wisprdata>] which have been calibrated <cit.> and where contributions to the white-light emissions by dust particles (i.e. the F-corona) have been subtracted to reveal only the faint K-corona made up of coronal electrons <cit.>. §.§ First insights on the transients observed by WISPR WISPR has detected a wealth of fluctuations in the slow wind, whose signatures can be very diverse. Among these fluctuations many have been associated with magnetic flux ropes with a clear dark cavity, suggesting that these flux ropes were likely observed edge-on or at a small inclination angle. We present two examples of such signatures in Fig. <ref>, captured by the inner telescope WISPR-I during the eighth PSP encounter. 
Two bright shells (green arrows) can be seen, one circular (bottom panel) and the other more V-shaped (top panel). Such shapes have already been observed from 1 AU, albeit to a larger spatial extent. They have been especially captured in great detail in events associated with pristine slow CMEs observed by WISPR. The V-shape was interpreted either as a slight inclination of the flux rope with respect to the line of sight (LOS) of the observer or as a byproduct of the reconnection process itself that generates the flux rope <cit.>. The fact that the brightness enhancement is often more marked at the back end of the flux rope supports the latter scenario, by an accumulation of plasma from the reconnection exhaust. A closer look at the bottom panel of Fig. <ref> shows an inner circular structure (yellow arrows) that has also been detected in other transients observed by WISPR <cit.>. Most signatures of this scale have been found to be associated with sporadic (slow) CME events, where flux ropes are already present low in the corona well below the tip of streamers <cit.>. In this paper we focus on flux ropes that form on a regular basis just above the tip of helmet streamers, which are presumed to be major contributors to the variability of the slow wind. Compared to past near 1 AU observations, the novelty of WISPR observations is in imaging streamers from much closer in, providing clearer signatures of its embedded transients and access to smaller scales. In figure <ref> we show two events that may be related to quasi-periodic structures captured by WISPR-I during the 8th (top panel) and 11th (bottom panel) PSP encounters, hereafter referred to as the April 2021 and March 2022 events, respectively. We were able to identify a series of other similar events in WISPR-I images, but we only selected here the most visible ones for illustrative purposes, particularly in the most recent WISPR-I observations. Although more difficult to interpret because of a higher solar activity, they reveal local features that clearly stand out from the background signal. The top panel unveils a track of three small-scale structures. Similarly to the larger-scale events shown previously in Fig. <ref>, they also appear as bright annuli suggesting flux ropes seen edge-on (yellow arrows). At this point it is hard to say whether these structures are actual small-scale flux ropes or if they are located far away from WISPR, which we discuss below. In contrast, the bottom panel shows arch-like signatures (orange arrows). Similar signatures have already been observed from 1 AU, and have been related to flux ropes seen with a greater inclination angle or almost face-on <cit.>, or also to expanding AR loops <cit.>. In the latter case though, the expansion of the loops is much slower than the propagation speed of the transients captured by WISPR (see the fitting performed in Sect. <ref>). Similarly to the April 2021 event shown in the top panel, a close-up visual inspection of the March 2022 event also reveals consecutive arches following each other. Both these events show interesting periodic behaviour in their spatial distribution, and hence they may be connected to the above-mentioned 90-180 min quasi-periodic structures that have been previously detected in the slow wind; this is discussed further in Sect. <ref>. Finally, we can give a rough estimate of the brightness variation induced by the passage of these transients. 
For this purpose, we examined the pixel values as given in units of mean solar brightness (B_⊙) in the level 3 WISPR-I .fits files. It is important to note that these data products are not photometrically accurate because some of the K-corona emissions might be removed during the background removal procedure.[See the disclaimer about the level 3 (version 1) data at <https://wispr.nrl.navy.mil/wisprdata>] We averaged the emissions over representative areas that define the transients and the background (host) streamers, and computed the relative difference (B_transient-B_streamer)/B_streamer. We found relative brightness increases of ≈ 80-95% for the April 2021 (edge-on case) event and ≈ 30-50% for the March 2022 (face-on case) event, which is brighter than what has been typically measured from 1 AU for the <cit.> blobs. As we show throughout this paper, the Sheeley blobs and the quasi-periodic structures captured by WISPR can be related to two different families of transients that result from a pinch-off reconnection at the tip of helmet streamers. §.§ Global context from near 1 AU observations To get a better context for these events, we construct WL maps of the streamer belt as observed from near 1 AU by LASCO-C2 over half a solar rotation. We show these maps in Fig. <ref> (panels b and d). Estimates of the heliospheric current sheet (HCS) derived from potential field source surface (PFSS) extrapolations are plotted (red dashed lines) to help us differentiate pseudo-streamers (unipolar structures) from the main streamer belt (where the magnetic polarity switches sign). Assuming that the above transients originate from and propagate within the streamer belt, we identified two possible source regions that have an inclination consistent with the April 2021 and March 2022 events observed by WISPR. Since the imaged transients significantly stand out from the background streamers, they should be located quite close to the Thomson sphere (see the magenta lines) where most of the WL emissions are expected to originate (see Sect. <ref>). We identified two possible source regions, a nearly aligned section of the streamer belt located within 60-110^∘ (and ≈ 0^∘, see panel b) of Carrington longitudes (and latitudes), and an inclined section of the streamer belt located within 220-280^∘ (and 20-0^∘, see panel d). In an independent study <cit.> determined a similar source region for the April 2021 event using a more sophisticated tracking technique, and hence supports the fact that this event was indeed propagating within the main streamer belt. The low coronal structures underlying the streamer belt, as observed in the extreme ultraviolet (EUV) by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) are shown in panels a and c of Fig. <ref>. In panel c two major ARs can be clearly seen at longitudes 210^∘ and 245^∘, which are located underneath the potential source streamer that produced the March 2022 event. These ARs may prove important in the formation of streamer transients. The hot plasma in such active closed-field regions can be prone to expansion into the corona through thermal instability, bringing the AR loops that are frozen into that plasma to higher coronal heights, eventually up to the cusp of streamers. On the other hand, in panel a no significant AR is visible beneath the streamer that potentially produced the April 2021 event observed by WISPR. 
However, it has also been suggested that the stretching of streamers can naturally occur simply as the result of the magnetic field near the cusp being too weak to hold the thermal pressure exerted by the underlying plasma <cit.>. The two mechanisms could act together in the formation of streamer transients through the pinch-off reconnection process, which is actually supported by observations <cit.> and by simulations <cit.>. In particular, as we discuss in the results section (<ref>), the pinch-off reconnection process can generate transients at a low frequency that is quite variable and depends on the local coronal conditions beneath the streamer. Therefore, the presence and amount of ARs beneath streamers could implicitly affect the rate at which these low-frequency transients are produced. From past observations near 1 AU, these transients were detected with periods varying from ≈ 8-16 hr <cit.> to 0.5-2 days <cit.>. A future statistical study that links these heliospheric measurements to low atmospheric EUV observations would be helpful to better assess the contribution of ARs to the release of streamer transients. §.§ Tracking transients in WISPR J-maps The usual and efficient way to further characterise transient features and gain more insight into their possible generation mechanism is to measure their periodicity. For this purpose, a method that has been widely used across the community is to track transients in distance-time maps called J-maps <cit.>. The bright features that appear in J-maps then provide insightful information on the periodicity, propagation speed, and acceleration profiles of the transients (albeit with some limitations, as discussed below). These maps are commonly produced by extracting pixels along a fixed direction (either at the ecliptic or at another position angle), and using the elongation angle (ϵ) as a measure of the angular distance away from Sun centre. Tracking transient features in heliospheric images has long been a delicate task, even more so for a rapidly moving and up-close imager like WISPR. New techniques have been developed to better track WISPR features, whether they are static <cit.> or dynamic <cit.>. These techniques include a number of corrections to account for the effects of spacecraft motion, perspective, and orbit out of the solar equatorial plane. For instance, a transient propagating radially outwards from the Sun does not necessarily remain at a constant position angle as it moves across the WISPR FOV, and its distance to WISPR may also vary. This can affect its appearance in J-maps. For instance, curved signatures were noted during a CME event that came close to the two heliospheric imagers on board STEREO <cit.>. Furthermore, when a target moves away from the observer, leading to an apparent slowing of its propagation speed, this can also produce curved signatures in J-maps, as we see later for the April 2021 event. Performing a precise fitting of the transients observed by WISPR is beyond the scope of this study, which circumvents the need for a complex tracking technique such as that developed by <cit.>. The J-maps associated with the March 2022 and April 2021 events introduced earlier are shown in Fig. <ref>. The corresponding slits along which the J-maps were constructed are plotted in Fig. <ref> as green lines. These J-maps are classical distance–time maps, except that instead of the typical elongation angle that measures the angle away from the Sun centre, the radial distance to the Sun is used.
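As an illustration of how such maps can be assembled, the following Python sketch stacks intensity profiles extracted along a fixed slit into a distance–time array; the conversion from elongation ϵ to heliocentric distance assumes, as discussed in the next paragraph, that each pixel maps onto the Thomson sphere (r ≈ d_obs sin ϵ). The array names, shapes, and synthetic test data are placeholders and not the actual WISPR processing chain.

```python
import numpy as np

def jmap_from_stack(frames, times, slit_rows, slit_cols, elong_deg, d_obs_rsun):
    """Build a J-map: one column per frame, one row per position along the slit.

    frames         : (n_t, ny, nx) array of calibrated images
    times          : (n_t,) observation times
    slit_rows/cols : pixel indices defining the slit (e.g. along the ecliptic)
    elong_deg      : (n_slit,) elongation of each slit pixel, in degrees
    d_obs_rsun     : (n_t,) Sun-observer distance in solar radii
    """
    n_t, n_s = len(times), len(slit_rows)
    jmap = np.zeros((n_s, n_t))
    r_thomson = np.zeros((n_s, n_t))
    for k in range(n_t):
        jmap[:, k] = frames[k, slit_rows, slit_cols]
        # Thomson-sphere projection: heliocentric distance of the point where the LOS at
        # elongation eps intersects the sphere whose diameter is the Sun-observer segment.
        r_thomson[:, k] = d_obs_rsun[k] * np.sin(np.radians(elong_deg))
    return jmap, r_thomson

# Minimal synthetic example (3 noise frames, 100-pixel slit, WISPR-I-like elongations)
rng = np.random.default_rng(0)
frames = rng.random((3, 512, 512))
times = np.arange(3) * 13.0                      # minutes, WISPR-like cadence
slit_rows = np.full(100, 256); slit_cols = np.arange(100, 200)
elong = np.linspace(13.5, 53.0, 100)             # degrees
d_obs = np.full(3, 35.0)                         # observer at 35 R_sun
jm, r = jmap_from_stack(frames, times, slit_rows, slit_cols, elong, d_obs)
```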
The implicit assumption needed to determine this parameter, however, is that the extracted pixels are projected onto the Thomson sphere, a reference surface where WL emissions are expected to be strongest (see discussion in Sect. <ref>). From these J-maps we were able to infer some insightful properties of the transient structures captured by WISPR, which we describe in the following paragraphs. Propagation profiles The J-maps allow us to perform a direct visual inspection of the speed profile of the propagating structures. For this purpose we fitted profiles of constant speed for several of the most visible stripes in Fig. <ref>. We note that several sub-structures could also be seen between some of the brightest fronts, but they were too faint to be shown here. The March 2022 event (right panel) is fairly well described by constant-speed profiles at ≈ 415 km/s. In contrast, the April 2021 event (left panel) shows curved stripes that deviate from constant-speed profiles and indicate a very low speed (≈ 160 km/s). Using a more sophisticated tracking technique on the April 2021 event, <cit.> obtained a similarly low propagation speed of ≈ 190 km/s. Although such a low propagation speed could be due to perspective effects (e.g. the transients moving away from WISPR), we show in Sect. <ref> that here it is probably associated with a very slow and dense solar wind flow within the HPS. Curved signatures in J-maps have already been found, for instance, in STEREO observations <cit.> and also more recently in WISPR <cit.>. These apparent decelerations were in fact associated with the effect of the imaged structures getting closer to or farther away from the observer. Since PSP remained relatively `static' (i.e. co-rotating with the solar corona) at that time, this suggests that the structure itself was moving with respect to WISPR. This is also consistent with the WL signatures, which suggest that these flux ropes propagated along the PSP orbital plane (see the green dashed line in the top panel of Fig. <ref>), where their legs may have come closer to or farther away from WISPR during their expansion. The pinch-off reconnection mechanism at the tip of streamers indeed predicts that such flux ropes develop large azimuthal extents, both at their generation and during their expansion in the solar wind <cit.>. Periodicities The periodicities measured between the fitted stripes are ≈ 110-120 min (Fig. <ref>, left panel) and ≈ 130-175 min (right panel). This falls well within the typical ≈ 90-180 min range previously detected from near 1 AU observations. Furthermore, similar periodicities are also retrieved in an upcoming statistical study by <cit.> that includes a few PSP encounters. Because of the rapidly varying viewing conditions of WISPR, quasi-corotation phases do not last very long, and hence periodicities above ≈ 10 hr cannot easily be detected. Therefore, it remains difficult to check whether the longer ≈ 8-16 hr periods of streamer blobs measured previously from slowly moving 1 AU observatories also manifest in WISPR images. As we discuss in the next section, these periodicities could be byproducts of the pinch-off reconnection process occurring at the tip of streamers. More precisely, they could be the manifestation of multiple modes associated with the tearing instability that can develop at the HCS.
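For completeness, the constant-speed fits and the period estimates of the kind quoted above can be obtained with a few lines of Python; the track coordinates and onset times below are invented for illustration, and the constant-speed assumption is the same as in the fits shown in Fig. <ref>.

```python
import numpy as np

R_SUN_KM = 6.957e5

def fit_constant_speed(t_min, r_rsun):
    """Least-squares fit r(t) = r0 + v*t to one tracked stripe; returns v in km/s and r0 in R_sun."""
    v_rsun_per_min, r0 = np.polyfit(t_min, r_rsun, 1)
    return v_rsun_per_min * R_SUN_KM / 60.0, r0

# Hypothetical track points read off a J-map (time in minutes, distance in R_sun)
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
r = np.array([8.0, 9.05, 10.1, 11.2, 12.2])
v_kms, r0 = fit_constant_speed(t, r)

# Periodicity: median time offset between successive, roughly parallel stripes
onset_times = np.array([0.0, 115.0, 230.0, 355.0])   # minutes, illustrative
period = np.median(np.diff(onset_times))
print(f"speed ~ {v_kms:.0f} km/s, period ~ {period:.0f} min")   # ~400 km/s, ~115 min for these points
```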
§.§ Insights from plasma measurements taken in situ at PSP We check whether the low propagation speeds measured for the April 26, 2021, event imaged by WISPR-I are realistic, by making a rough comparison with the solar wind speeds measured in situ at PSP around that time, as shown in Fig. <ref>. This is possible because after that event, PSP entered in a pro-grade phase, and hence could a few days later sample a solar wind channel that probably hosted the transients captured by WISPR-I (see the PSP orbit plotted in the top panel of Fig. <ref>). In the top panel, we can see that PSP indeed measured very slow (purple dots) and dense (emerald green dots) solar wind with ≈ 160-250 km/s and ≈ 1-8× 10^3 1/cc at around 15 R_⊙. Such plasma flows are typical of coronal streamers at that distance <cit.>, and more generally of HPS <cit.>, which is also supported by a few HCS crossings during this interval (here simply identified by a global polarity inversion of the magnetic field, see middle panel). These SSWs are hence potential hosts of blobs and quasi-periodic structures like those observed by WISPR-I. Unfortunately, in this case the transients imaged by WISPR-I on April 26 (located within ≈ 8-16 R_⊙) already moved far away before PSP could get inside that streamer belt starting from April 29. Nevertheless, the solar wind velocities measured there (at ≈ 15 R_⊙, see the top and bottom panels of Fig. <ref>) closely match the ≈ 160 km/s fitted speed of the imaged transients. § MODELLING: METHOD Several main characteristics of transients in the slow wind have been extracted from WISPR observations. We introduce in this section our modelling approach to get more insights into the possible origin of these structures. We test the idea that the pinch-off reconnection process at the tip of streamers is responsible for the formation and release of the small transients observed by WISPR. This mechanism is tested first in an idealistic simulation of a very high-resolution time-dependent 2.5D MHD dipolar corona (Sect. <ref>), and then a lower resolution time-dependent 3D MHD simulation of the conditions encountered by PSP during its ninth passage near the Sun (Sect. <ref>). We then present in Sect. <ref> our approach to building synthetic products that can be compared with WISPR observations. §.§ Idealistic simulation of a dipolar corona <cit.> describe in detail the pinch-off reconnection mechanism induced by the tearing instability, in an idealistic 2.5D simulation of the solar corona. To allow a fair comparison with actual observations from WISPR, the <cit.> simulation was run again with more outputs (one every ≃13 min) to match the typical temporal cadence of WISPR. In figure <ref> we illustrate the main phases of the pinch-off reconnection mechanism, with several snapshots extracted from the simulation and zoomed-in views in the panels on the left side. Starting from a near equilibrium state (t=0 min, first row), the tip of the helmet streamer eventually expands (t=1540 min, second row) due to pressure imbalance between the closed-field plasma confined beneath (as in coronal loops) and the out-flowing plasma from the adjacent open field. As the helmet streamer expands a thinning also occurs at its back end, up to the point where the streamer gets sufficiently thin locally for the tearing instability to trigger magnetic reconnection (t=2349.8 min, third row), referred to as the `ballooning mode' in <cit.>. 
The ejecta of a large plasmoid of dense and initially closed-field material follows (t=2880.8 min, fourth row). The tearing instability further develops at smaller scales triggering reconnection at multiple secondary sites, and is called the `tearing mode'. Plasma material is pushed away from these reconnection sites and then accumulates in small-scale and dense plasmoids. More precisely, this plasma concentrates in shell-like structures where the magnetic field is mostly poloidal, as in magnetic flux ropes. In contrast, the core of these structures is less dense due to a dominant toroidal magnetic field component that is directed across the figure plane (see animation associated with Fig. <ref> available online,[<https://doi.org/10.5281/zenodo.8135596>] as well as those presented in the original paper by ). For the first time, the inner and outer WISPR telescopes combined can provide an in-depth view of the transient structures that form from pinch-off reconnection, right in their formation regions. As illustrated in figure <ref>, WISPR may see different signatures according to its distance from the Sun, where the dashed white and dark rectangles show the approximate WISPR-I FOV, assuming that PSP is located respectively at 35 R_⊙ and 10 R_⊙ from the Sun (10 R_⊙ being an estimate of the closest approach to be reached by 2024). For instance, it happens that some of the simulated transients eventually merge together along their propagation to form larger and/or denser plasmoids (see lower left panel of Fig. <ref>). Hence, we pursue here the work of <cit.> to examine how such simulated structures may look in a white-light imager such as WISPR. To produce synthetic white-light observables, we need first to extend the 2.5D simulated domain into three dimensions. We perform an axisymmetric demultiplication of the 2.5D simulation about the solar rotation axis, hence producing a 3D corona with a flat streamer belt at the equator. By changing the position of our virtual observer we can then test most situations encountered by WISPR along its orbits and more generally throughout the solar cycle, that is from a horizontal to vertical streamer belt configuration typical of a solar minimum and maximum, respectively. §.§ Case study simulation of the ninth PSP encounter Because the 2.5D set-up presented in <cit.> is highly idealistic, an attempt has been made to extend this work to a fully fledged 3D model that is called WindPredict-AW <cit.>. In this modelling framework, the 2.5D magnetic structures mentioned above translate into 3D magnetic flux ropes, where their generation and propagation can now be studied in a self-consistent manner. The disadvantage, however, is that the 3D set-up provides a lower level of spatial resolution than the 2.5D set-up. Since magnetic reconnection is allowed by numerical diffusion of the numerical scheme itself, it is bound by the actual numerical size of the mesh near the HCS. The 3D set-up is hence not optimal for the full development of the tearing instability, as is the idealistic set-up <cit.>. Despite this limitation, the 3D simulation still does produce transients but at low frequency, which are the ballooning modes and only the quasi-periodic structures with periods ≳ 4 hr. By applying a realistic photospheric magnetic map at the inner boundary, <cit.> was able to reproduce the statistical occurrence of streamer flux ropes that intersected both PSP and SolO during the joint observation campaign of June 2020. 
We here pursue the work of <cit.> with a similar 3D simulation set-up, but applied to the ninth PSP encounter (August 2021). The inner boundary is set with the GONG-ADAPT (11th realisation) magnetogram of August 14, 2021, 00:00 UT, and kept fixed over the entire simulated period. The magnetogram was selected among many different sources and dates to best match the observed shape and location of the streamer belt, as seen from 1 AU by SoHO-LASCO over a full solar rotation. The selection process is based on the method presented in <cit.>. Another criteria was also the correct prediction of both magnetic sectors and timing of HCS crossings measured in situ by PSP. Once the simulation relaxed, outputs of the entire 3D simulated domain were extracted every ≃ 13 min to match the actual cadence of WISPR. For the sake of computational time the simulation was run until 100 outputs were obtained, which covers a time interval of ≃ 22 hr starting at perihelion. The simulation is kept fixed outside this interval, and allows us to synthesise WISPR images over a longer period even though the simulated solar wind remains static. The static phase is still meaningful to differentiate the effect of the fast-moving probe from the propagation of the solar wind structures within the synthesised images. The procedure to produce WISPR synthetic images is described in Sect. <ref>. The simulated streamer belt and density structures propagating within its core are shown in Fig. <ref>, along with the FOV of both WISPR-I (in white) and WISPR-O (in grey). At that time WISPR was imaging from a distance of ≈ 26 R_⊙, a highly warped streamer belt typical of a high solar activity. Throughout the region scanned by WISPR, the streamer belt undergoes significant latitudinal shifts within ≈15-25^∘ of Carrington latitude. A few flux rope structures have been identified in the simulation (see the coloured arrows). All of them, except the farthest one (cyan arrow) produce visible WL signatures in the synthetic WISPR images (see Sect. <ref>). The flux ropes have different extents and widths that can be explained by a different stage of their formation and/or evolution. We also find that the spatial extent of these flux ropes within the streamer belt varies, and that it is delimited by intersections of pseudo-streamers with the main streamer belt <cit.>. This makes up a complex network that is inherently connected to the S-web <cit.>. Finally, each of these flux ropes shows a different inclination. All of this will affect their appearance from the WISPR perspective as we show in Sect. <ref>. §.§ Producing synthetic WISPR images Synthetic WISPR images are produced similarly to what was done in <cit.>, except that in the present work we used time-dependent simulations rather than static simulations. Following the Thomson scattering theory <cit.>, the total intensity received by a pixel detector from scattered electrons can be expressed as an integral along the path length z along each LOS: I^tot_t = ∫_z=0^z→ +∞ I_t dz=∫_z=0^z→ +∞ n_e z^2 G dz (in W.m^-2.sr^-1) G = B_⊙πσ_e/2z^2(2_1[(1-u)C+uD]_2-sinχ^2_1[(1-u)A+uB]_2) . Here I_t refers to the total (and not polarised) intensity, B_⊙≃ 2.3× 10^7 W.m^-2.sr^-1 the Sun's mean radiance (or surface brightness), and σ_e=r_e^2≃ 7.95× 10^-30 m^2 the electron cross-section. The electron density n_e is an input 3D datacube interpolated at each LOS point. 
The G function here includes contributions from both pure-geometric scattering (indicated by 1) and the solar illumination function (indicated by 2). For far distances to the Sun, G can be approximated as (R_⊙/r)^2(2-sinχ^2)=(R_⊙/r)^2(1+cosχ^2), where χ is called the scattering angle between the scattering site and the Sun-observer line, and (R_⊙/r)^2 represents the classical fall-off of sunlight with heliocentric distance r <cit.>. For an observer as close to the Sun as WISPR, additional effects should be considered, such as the collimation of sunlight and limb-darkening, using for instance the van de Hulst coefficients A, B, C, and D defined in <cit.>. A direct observation of Eq. (<ref>) shows that the integral is semi-infinite on the path length z. In practice, it is possible to shrink this integral to a limited (finite) region that includes most contributions to the total brightness (see discussion in Appendix <ref>). Theoretical works have shown that WL emissions produced from Thomson scattering are expected to peak at a surface called the Thomson sphere <cit.>. This can be geometrically defined by a sphere with its centre located halfway along the Sun–observer line, and with the length of this line for diameter. However, <cit.> and <cit.> have demonstrated that this peak at the Thomson sphere is very smeared out. Therefore, a detector such as WISPR would not be sensitive to electrons that are concentrated near the Thomson sphere, but rather to a much broader region on either side of the Thomson sphere (≈χ_TS± 45^∘) that is called the Thomson plateau <cit.>. An illustration of this effect for WISPR is given in <cit.>. Although there already are several numerical implementations of this theory within the scientific community <cit.>, we opted to develop a new algorithm that we can tailor to the specific constraints of WISPR and the needs of this study. The procedure is summarised below. Instrument definition: Given both ephemeris (positioning) and pointing information for our virtual instrument, we build a 2D matrix of LOS coordinates. Grid optimisation: A dynamic grid refinement algorithm adjusts the sampling along each LOS, so as to capture at best the smallest physical structures in the simulation box (see Appendix <ref>). The sample points are distributed from PSP, and pass through and beyond the Thomson sphere. For the WISPR images synthesised in this work, this represents 241 million sample points to be optimised. Thomson scattering computation: Given a 3D simulated datacube of electron density, the Thomson scattering formula (Eq. <ref>) is computed at each sample point. This includes beforehand an interpolation step that can be very costly, given the large number of sample points and the size of the input datacubes used in this work ((n_r,n_θ,n_ϕ)=(768,384,360) and (256,160,320) for the idealistic and fully fledged 3D set-up, respectively). LOS integration: The synthetic image is finally obtained by summing up all local contributions to the total brightness along each LOS. WISPR is a detector placed on a rapidly moving observatory sweeping extended regions of the solar corona in only a few days, together with a rapid variation in its distance to the Sun. The WISPR FOV must therefore be updated very regularly, which is done by rerunning phase 1 and 2 for every single image to be synthesised. As WISPR is also much closer to the imaged coronal structures, it is critical to keep an accurate tracking of WISPR's pointing by using the World Coordinate System (WCS). 
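To make phases 3 and 4 concrete, the sketch below integrates Eq. (<ref>) along a single line of sight through an analytic density model, using the far-field approximation G ∝ (R_⊙/r)^2(1+cos^2 χ) quoted above rather than the full van de Hulst formulation; the density profile, sampling, and geometry are deliberate simplifications and not the implementation used for the synthetic images in this paper. The handling of phase 1 (pointing) is discussed next.

```python
import numpy as np

R_SUN   = 6.957e8      # m
B_SUN   = 2.3e7        # W m^-2 sr^-1, mean solar radiance quoted above
SIGMA_E = 7.95e-30     # m^2, electron cross-section r_e^2

def n_e(r):
    """Toy coronal electron density [m^-3] with an r^-2 fall-off; stands in for the MHD datacube."""
    return 1.0e12 * (R_SUN / r) ** 2

def los_brightness(d_obs, elong_deg, z_max=4.0, n_steps=4000):
    """Total brightness along one LOS in the far-field approximation G ~ (R_sun/r)^2 (1 + cos^2 chi)."""
    eps = np.radians(elong_deg)
    obs = np.array([d_obs, 0.0, 0.0])                      # observer on the x-axis, Sun at the origin
    los = np.array([-np.cos(eps), np.sin(eps), 0.0])       # unit LOS vector, eps away from the Sun
    z = np.linspace(1e-3 * d_obs, z_max * d_obs, n_steps)  # path length measured from the observer
    pts = obs[None, :] + z[:, None] * los[None, :]         # scattering sites along the LOS
    r = np.linalg.norm(pts, axis=1)                        # heliocentric distance of each site
    cos_chi = np.abs(pts @ los) / r                        # cosine of the scattering angle (only cos^2 enters)
    integrand = n_e(r) * (R_SUN / r) ** 2 * (1.0 + cos_chi ** 2)
    dz = z[1] - z[0]
    return 0.5 * np.pi * SIGMA_E * B_SUN * np.sum(integrand) * dz

print(los_brightness(d_obs=35 * R_SUN, elong_deg=30.0))    # W m^-2 sr^-1
```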
For this purpose, phase 1 exploits the IDL routines provided by the WISPR instrument team through the SolarSoft library. For the idealistic dipolar numerical set-up, however, we opted for a simple user-defined FOV that we can easily control to test different scenarios in a sandbox. This allows us to simulate various viewing conditions that WISPR-I have encountered (or may in the future) at distinct phases of the solar cycle. We define a FOV representative of WISPR-I in the helioprojective–Cartesian frame with HPLN=(10,50)^∘(azimuthal angle) and HPLT=(-20,20)^∘(elevation angle) where (HPLN=0,HPLT=0) points towards solar centre. We assume a null roll angle for simplicity. The helioprojective frame is a sphere centred at the observer position, which needs to be defined as well. We assumed a PSP-Sun distance of 35 R_⊙ and 10 R_⊙; the first is an average between the March 2022 and April 2021 events presented earlier, and the second is intended to represent the closest approach that PSP will ever reach in 2024. The remaining parameter is the latitude of our virtual observer; the longitude does not matter since the idealistic simulation is axisymmetric about the solar rotation axis. Varying the latitude θ allows us to mimic different inclinations of the streamer belt from the WISPR perspective, where large (or small) θ (in absolute value) are intended to be representative of a streamer belt seen face-on (or edge-on) during solar maximum (or minimum) conditions. An inclination angle θ of 0^∘ and 40^∘ has been assumed, for comparison with the April 2021 and March 2022 event, respectively. § MODELLING: RESULTS §.§ Idealistic simulation of a dipolar corona In Appendix <ref> we gather the raw (absolute brightness) synthetic WISPR images produced from the idealistic dipolar modelling set-up introduced in Sect. <ref>. To enhance the visibility of transient structures, we follow the base difference method where a background image (here computed as the average brightness over the entire time interval) is subtracted from each individual image. The resulting base-difference synthetic images are shown in Figs. <ref>-<ref>, where bright or dark colours respectively correspond to an enhancement or depletion in electron density with respect to the background solar wind. These base-difference images reveal faint brightness variations much more clearly, and thanks to the new adaptive grid refinement method developed for this paper, small-scale density structures are rendered with great precision. This manifests as very smooth brightness variations across the LOS, where otherwise sharpness would indicate an inappropriate sampling along the LOS. Some small spurious features (especially in the θ=40^∘ case) are sometimes seen. These are remnant artefacts from the adaptive grid refinement method that need further adjustments (discussed in Sect. <ref>). WL signatures We focus first on the two right-hand side panels of Figs. <ref>-<ref>, where the distance of PSP is taken close to that of the April 2021 and March 2022 events observed by WISPR (i.e. 35 R_⊙). The four rows cover two full cycles of the development of the tearing instability. The t=2880.8 min snapshot (bottom rows) is the one that illustrates the different phases best. We can group the synthetic WL signatures in two main families: bright diffuse patches and bright and more concentrated emissions. They are described below. The bright diffuse patches result from the main onsets of the tearing instability (i.e. the ballooning mode; see the orange arrows). 
Due to their rather large scale, they are likely the Sheeley blobs that have long been observed from 1 AU, and first detected by SoHO-LASCO <cit.>. When seen edge-on (θ=0^∘, Fig. <ref>) they show quite significant brightness enhancement of up to ≈ 35%; instead, when seen with some inclination (θ=40^∘, Fig. <ref>) they appear slightly dimmer with brightness enhancements below ≈ 10%. Because the simulated transients have here an infinite extent in azimuth, they show drifting signatures towards the FOV edges as they pass over WISPR location (see second row of Fig. <ref>, right panel), which are similar to WL signatures that were observed when WISPR approached and went through the streamer belt (; see also the simulations by ). Nonetheless, having infinite azimuthal extents for such transients is not realistic, as clearly shown by the March 2022 event, and we show in Sect. <ref> that this can be solved using the fully fledged 3D modelling set-up. The bright and more concentrated emissions exhibit quasi-periodic formation (see the green and purple arrows). Similar spatial distributions and widths to those of the April 2021 and March 2022 events observed by WISPR can be seen in the lower right panels of Figs. <ref> and <ref>, respectively. These quasi-periodic structures develop at the back of the main onsets described above, and can be connected to the long-observed hourly periodicities measured both remotely and in situ in the slow solar wind <cit.>. These structures are smaller in size than the streamer blobs discussed above, and hence we expect them to contribute less to the total brightness integrated along the LOS. Conversely, they exhibit much higher brightness enhancements because they contain a much higher concentration of plasma. Their brightness variation ranges ≈ 20-100% (edge-on case, θ=0^∘, Fig. <ref>) and ≈ 1-40% (face-on case, θ=40^∘, Fig. <ref>). Some of these structures propagate faster, and as a result sometimes coalesce with their preceding fellows or even merge with the main onset (see the purple arrows). In terms of brightness variation, there is a fair agreement between the simulated quasi-periodic structures and the transients observed by WISPR, that is ≈ 80-95% for the April 2021 event (edge-on case, to be compared with θ=0^∘) and ≈ 30-50% for the March 2022 event (face-on case, to be compared with θ=40^∘). On the other hand, the simulated ballooning modes (which can be associated with the Sheeley blobs) may be more difficult to see from the WISPR perspective as their signature is fainter and more diffuse. They are also less likely to be detected by WISPR due to their long periodicity, as we describe below in `Periodicities'. Access to shorter heliocentric distances might help to better resolve both structures as illustrated in the left panels of Figs. <ref>-<ref>, assuming the hypothetical ≃ 10 R_⊙ to be reached by PSP in 2024 at the closest approach. There the shrinking of the Thomson plateau (i.e. of the sensitive area of WISPR; see Sect. <ref>) should allow transients within streamers to more easily stand out from the background emissions. WISPR will also be able to observe these structures right in their formation region (3-7 R_⊙), and therefore might provide new clues about the tearing instability occurring at the HCS. Propagation velocities and acceleration profiles We now look at the kinematics of the simulated transients, by making J-maps as those shown in Fig. <ref>. A synthetic J-map for the θ=0^∘ case is given in Fig. 
<ref>, where the slit has been taken at the solar equator (i.e. along the streamer). The θ=40^∘ case is not shown because it does not change the rest of the analysis much. We retrieve here the two main families of WL signatures identified in the previous section. First, the wide patches associated with the main onsets of the tearing instability (i.e. the ballooning mode; see orange arrows). Second, the more concentrated emissions associated with quasi-periodic transients, not pinpointed here as they clearly stand out as bright thin stripes from the rest. The ballooning modes show quite different signatures in the J-map, with more curvature and less inclination. Their lower inclination indicates that they propagate at a slightly lower speed than the quasi-periodic structures, which are then likely to merge together as discussed previously. Their curvature may also be indicative of a more progressive acceleration until they reach their terminal speed after ≈ 10-15 R_⊙; in contrast, the fast quasi-periodic structures show clear constant speed profiles. Here the acceleration patterns are likely to be actual and not apparent accelerations as our WISPR-I FOV remains static in this simulation set-up (see Sect. <ref>), although LOS-integration effects could still contribute to these curvatures, as already discussed in Sect. <ref>. In both cases, the simulated transients reach a terminal speed of ≈ 250 km/s, which corresponds to the bulk speed of the very slow and dense wind at the core of the streamer belt in the simulation. Periodicities The bottom panel of Fig. <ref> shows the dominant periodicities over four full cycles of the tearing instability. The time elapsed between each main onset of the tearing instability (i.e. the ballooning modes) is quite variable, reaching ≈ 25 hr between the first two and ≈ 50 hr for the others. Here, this is mostly driven by how fast helmet streamer loops can grow due to pressure imbalance, and hence is highly sensitive to local coronal conditions. This long periodicity is then likely to vary significantly from one simulation to another, and more generally over solar longitudes and along the solar cycle (see the discussion in Sect. <ref>). In comparison, a shorter ≈ 8-16 hr period had been detected from past STEREO-A observations <cit.>, but this study focused on a few specific events around the solar maximum of cycle 24. More recently, an analysis of the observations taken by the STEREO-A COR-2 coronagraph near solar minimum has shown density variations in the streamer belt on timescales of 0.5-2 days <cit.>, which this time agree with our simulation. Due to a fast and highly elliptical orbit, WISPR is not appropriate to detect such long periodicities. Therefore, the legacy 1 AU observatories remain valuable assets, which in complement to the recently launched Solar Orbiter might finally allow us to better parametrise such events. Regarding the quasi-periodic structures generated between each main onset, a wavelet power spectrum <cit.> reveals dominant periods around ≈ 2-3 hr=120-180 min and ≈ 7-10 hr. In addition, we note that the simulation exhibits some periodicities as low as ≈ 1.5 hr=90 min (at t=100-105 hr). These results are in good agreement with the ≈ 90-180 min to ≈ 8-16 hr periods that have been typically detected from 1 AU <cit.>. More specifically, they also agree with those measured during the April 2021 (130-175 min) and March 2022 (110-120 min) events observed by WISPR. 
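Periods of this kind can be recovered from any evenly sampled brightness series with a plain Fourier periodogram, as sketched below using NumPy only; unlike the wavelet analysis used for Fig. <ref>, this simple estimate does not resolve how the periods evolve in time, and the test signal here is artificial.

```python
import numpy as np

dt_min = 13.0                                   # WISPR-like cadence [min]
t = np.arange(0, 200 * 60, dt_min)              # ~200 hr of samples [min]

# Artificial brightness series: 150-min and 8-hr modulations plus noise
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * t / 150.0)
          + 0.5 * np.sin(2 * np.pi * t / (8 * 60.0))
          + 0.3 * rng.standard_normal(t.size))

# One-sided FFT periodogram
signal = signal - signal.mean()
power = np.abs(np.fft.rfft(signal)) ** 2
freq = np.fft.rfftfreq(t.size, d=dt_min)        # cycles per minute
periods = 1.0 / freq[1:]                        # minutes (skip the zero frequency)

# Two dominant periods (largest peaks)
top = np.argsort(power[1:])[::-1][:2]
print("dominant periods [min]:", np.sort(periods[top]))   # ~150 and ~480 for this test signal
```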
§.§ Case study simulation of the ninth PSP encounter We now exploit the fully fledged 3D simulation set-up introduced in Sect. <ref> and detailed in <cit.>. Starting from April 2021 (eighth encounter), interpreting WISPR observations has become highly challenging even with the use of such state-of-the-art modelling. This is primarily due to PSP diving much deeper inside the nascent solar wind and to an increase in the solar activity resulting in a much more structured corona. Tremendous efforts in tuning-up the model parameters would be required for a fair one-to-one comparison with the actual WISPR observations taken during the ninth encounter (August 2021). A dynamic update of the magnetic map at the inner boundary would also be essential to reach this goal. We leave this for future works, but there is still a valuable set of information that we can extract from this simulation to feed the current discussion. We present in figure <ref> the result of our forward modelling method applied to this simulation. Similarly to Figs. <ref>-<ref>, a difference method is used to better visualise brightness fluctuations due to transient propagating structures. However, computing a mean background over the entire time interval is no longer appropriate since our virtual WISPR observer is no longer static. We then follow the well-known running difference method here, where the mean background image is computed over a sliding temporal window. For completeness and a more realistic impression, the raw synthetic products in absolute brightness are also shown in Fig. <ref>. In contrast to the idealistic modelling set-up, a wealth of signatures are produced here with a rich diversity of shapes, extents, and locations. They are indicated in Fig. <ref> by coloured arrows, which can be directly connected to the flux rope structures present in the simulation snapshot shown in Fig. <ref>. Transients can be seen throughout the WISPR-I/O FOVs. Overall they show great topological similarity with the April 2021 and March 2022 events observed by WISPR-I (see Fig. <ref>), namely arch-like (top panel) and blob-like (bottom panel) signatures representative of flux ropes seen face-on and edge-on, respectively. This time, the fully fledged 3D set-up is more realistic with arch signatures of finite (and not infinite) spatial extent because it includes secondary (or pseudo) streamer structures that were missing in the idealistic dipolar set-up (see Sect. <ref>). Furthermore, using Fig. <ref> we can estimate that the simulated transients induce a similar increase in relative brightness of ≈1-4% during their passage, although they are located at different distances from WISPR. These values are quite low compared to the brightness variations estimated above from WISPR observations and the dipolar corona set-up because the background field here is computed over a much shorter temporal window, which is not representative of the emissions from the background streamers. For this purpose we should use instead the raw synthetic images (in absolute brightness) shown in Fig. <ref>, and compare the absolute brightness inside and just outside the transients (similarly to what we did for the real WISPR observations). By doing so we obtain more reasonable brightness variations of ≈8-17%. Many of the brightness variations visible in Fig. <ref> do not result from a propagating transient feature, but from a change in the viewing conditions of WISPR as PSP flies rapidly along its orbit. 
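The two background-removal strategies used in this work (a single mean background when the virtual observer is static, a sliding-window background when it moves) can be summarised in a few lines of Python. The sketch below is our own minimal illustration: the image cube, the drifting Gaussian 'transient', the window length and the noise level are all assumptions chosen for readability, not the actual processing applied to the figures.

```python
import numpy as np

def static_difference(cube):
    """Subtract a single mean background computed over the whole interval.

    cube : array of shape (n_frames, ny, nx) of synthetic WL brightness.
    Appropriate when the (virtual) observer is static, as in the dipolar set-up.
    """
    background = cube.mean(axis=0)
    return cube - background

def running_difference(cube, window=5):
    """Subtract a sliding-window mean background.

    Better suited to a moving observer (PSP/WISPR along its orbit), where a
    background computed over the full interval is no longer meaningful.
    """
    out = np.empty_like(cube)
    for i in range(cube.shape[0]):
        lo = max(0, i - window)
        # Background from the preceding frames (the first frame subtracts itself).
        background = cube[lo:i].mean(axis=0) if i > lo else cube[i]
        out[i] = cube[i] - background
    return out

# Hypothetical synthetic cube: 50 frames of 64x64 "brightness" with a drifting blob.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:64, 0:64]
frames = []
for i in range(50):
    blob = np.exp(-((x - i) ** 2 + (y - 32) ** 2) / 20.0)   # transient moving in x
    frames.append(1.0 + 0.05 * blob + 0.001 * rng.standard_normal((64, 64)))
cube = np.array(frames)

diff_static = static_difference(cube)
diff_running = running_difference(cube, window=5)
print(diff_static.shape, diff_running.shape)
```

As noted above, many of the remaining brightness variations in such difference products reflect the changing viewing conditions of the moving observer rather than propagating transients.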
This has been seen many times, for instance from the drift of streamer rays towards the FOV edges <cit.>. This effect can be visualised in a supplementary movie provided with Fig. <ref>. Starting from August 09, 2021, 18:37 UT, the movie shows the effect of PSP moving throughout the streamer belt, for the first time with a fully dynamic 3D modelling set-up that includes the self-generation of streamer transient structures. As WISPR moves closer to (and probably into) these imaged transients, their morphology changes quite significantly, which is the consequence of a change in the perspective and of a change in the sensitivity area of WISPR (see Sect. <ref>). Our comparison basis with actual observations can only be qualitative here for all the reasons mentioned above, such as the fact that the simulation does not cover the entire WISPR interval studied here (see Sect. <ref>). Even so, we were able to identify many similarities between this synthetic movie and the newest WISPR observations starting from August 2021 (official movies for all WISPR observations can be found online[<https://wispr.nrl.navy.mil/encounter-summaries>]). § DISCUSSION In this study we focused only on a few transient events observed by WISPR that we considered as promising candidates produced by the streamer pinch-off reconnection mechanism. The April 26 and March 01 events were picked especially for their great visibility, but do not represent the full set of observations. WISPR observations show a plethora of transient features, in particular in the late PSP encounters (from September 2020 and on). Further studies are required to infer those transient properties in a statistical manner <cit.>. Although the two modelling set-ups presented in this work show a great potential for interpreting some of the transient nature of WISPR images, there are still many improvements that are needed. Significant advances have been made in the forward modelling procedure compared to our previous study <cit.>, where most of the previous difficulties have been resolved. The adaptive grid refinement method allows for a much more accurate sampling of the LOS, which results in a very smooth rendering of small-scale density structures. Despite the high precision achieved, we still note some remaining artefacts of minor importance in the difference-based synthetic products. For the sake of computational tractability, for each individual image we had to keep the number of optimisation steps for the LOS sampling to a maximum of 30 in our case. This implies that the optimisation procedure may not fully converge for all LOS, at times generating some spurious features. These are minor inconveniences, and now the performance of our white-light rendering code is primarily limited by the quality of the input simulation. Future efforts should then concentrate on pushing existing solar coronal or wind models beyond their current capabilities in order to allow for a thorough understanding of the latest WISPR observations. In light of this work, we present below some areas of progress that could be addressed. Coronal structure fidelity: Current comparisons against WISPR observations greatly suffer from a misplacement of the main streamer belt and pseudo-streamer structures. This highlights the unique capability of WISPR to provide more stringent constraints to current coronal models. For instance the magnetic map set at the inner boundary is known to have a critical impact on the performance of existing MHD models. 
Having a magnetic map that is updated dynamically over time will be necessary to improve comparisons with recent WISPR observations. Ongoing efforts have also been directed towards a systematic benchmarking of MHD models against observations <cit.>, and WISPR observations could both greatly benefit from and support such works. Spatial resolution: Although the fully fledged 3D simulation set-up depicts a much more realistic solar coronal structure, it only permits a partial development of the tearing instability compared to the idealistic but much more (spatially) resolved 2.5D set-up. Future work will have to refine the spatial resolution around the HCS further, but that is extremely challenging in such global 3D MHD models while maintaining computational tractability. Temporal cadence and duration: Together with a sufficiently spatially resolved model, temporal cadence is also important to track these streamer transients as they rapidly propagate throughout the WISPR FOV. This will be important especially in the years to come when PSP will be able to sample these transients right in their formation region (i.e. the 10 R_⊙ case treated with the 2.5D set-up). Having simulation snapshots at a higher cadence than WISPR could also help to get a better match by allowing the synthetic images to mimic the `blurring' effect of exposure time. Furthermore, a simulation duration of more than one day would be preferable to maximise the scientific output. This can easily be achieved in a 2.5D context, but implies large amounts of data in full 3D modelling set-ups, a challenge even for modern computational facilities. § CONCLUSION The variability of the slow wind that originates from streamers has been analysed in light of the latest observations taken by WISPR. A few transient events have been identified with periodicities that are consistent with the previous 90-180 min range detected from near 1 AU observations. The pinch-off reconnection mechanism occurring at the tip of helmet streamers has long been predicted as a potential source mechanism of these quasi-periodic structures. For the first time this work provides strong evidence to support this scenario using two advanced MHD models of the solar wind and corona, each with its own pros and cons. Both give rise to the same fundamental process, however. First, a pressure instability of the coronal loops lodged beneath the helmet streamers allows them to rise in the corona. They stretch to a point where the current sheet that develops at their back becomes so thin that magnetic reconnection eventually occurs via the tearing instability. A large flux rope made of streamer material is then released (i.e. the main onset or ballooning mode), corresponding to the Sheeley blobs that were first detected by SoHO-LASCO. Behind this main ejecta follows the further development of the tearing instability at the HCS, which generates a myriad of quasi-periodic smaller-scale structures. We show that these quasi-periodic structures exhibit local density enhancements that are strong enough to be detected by WISPR, and that they actually show great topological similarities with two real events captured by WISPR. In addition, the simulated quasi-periodic structures have periodicities that agree well with these events, and also more generally with the 90-180 min range detected in past observations. These quasi-periodic structures could be reproduced in an idealistic dipolar set-up thanks to a very high spatial resolution at the HCS. 
However, this set-up lacked the realism needed to properly reproduce the actual WL signatures observed by WISPR. A global fully fledged 3D MHD model was then necessary to simulate the appearance of these structures in a self-consistent manner. However, the coarser spatial resolution did not allow the full development of the tearing instability, and thus of the quasi-periodic structures, of which only the periodicities longer than ≳ 4 hours could be reproduced. This work highlights the importance of the tearing instability occurring at the tip of streamers in fuelling the long-observed high variability of the slow solar wind. Furthermore, we discuss how extremely challenging the latest (and upcoming) WISPR observations are to interpret, even just for the quasi-steady component of the slow wind, because PSP is diving deeper and deeper within a solar corona that becomes highly structured with the rising phase of the solar cycle. Therefore, WISPR offers new stringent constraints to push existing models of the solar corona and wind beyond their current capability, which in turn should help in better understanding WISPR observations. § ACKNOWLEDGEMENTS The authors are indebted to an anonymous referee whose valuable suggestions greatly improved this work. This research has been funded by the ERC SLOW_SOURCE (DLV-819189) and NRC ORCS (324523) projects. A.K. was supported by NASA’s Parker Solar Probe mission under contract NNN06AA01C. The authors are grateful to Nicholeen Viall and Angelos Vourlidas for insightful discussions, and to the WISPR team for providing the data. The Wide-Field Imager for Parker Solar Probe (WISPR) instrument was designed, built, and is now operated by the US Naval Research Laboratory in collaboration with Johns Hopkins University/Applied Physics Laboratory, California Institute of Technology/Jet Propulsion Laboratory, University of Gottingen, Germany, Centre Spatial de Liege, Belgium and University of Toulouse/Research Institute in Astrophysics and Planetology. Parker Solar Probe was designed, built, and is now operated by the Johns Hopkins Applied Physics Laboratory as part of NASA's Living with a Star (LWS) program. The authors also thank A. Mignone and the development team of PLUTO, on which WindPredict-AW is based. The 2.5D and 3D WindPredict-AW simulations were performed on the Jean-Zay supercomputer (IDRIS), through the GENCI HPC allocation grant A0130410293. The photospheric magnetic maps used in this work are produced collaboratively by AFRL/ADAPT and NSO/NISP. The SOHO/LASCO data are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut für Aeronomie (Germany), Laboratoire d’Astronomie (France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA. This work made use of the data mining tools AMDA[<http://amda.irap.omp.eu/>] developed by the Centre de Données de la Physique des Plasmas (CDPP) with financial support from the Centre National des Études Spatiales (CNES). This study also used the NASA Astrophysics Data System (ADS[<https://ui.adsabs.harvard.edu/>]), the open-source GNU Image Manipulation Program (GIMP[<https://www.gimp.org/>]) and the ImageJ[<http://imagej.nih.gov/ij>] image processing tool developed by Wayne Rasband and contributors at the National Institutes of Health, USA. 
aa § COMPUTATIONAL CHALLENGES AND METHOD Having underresolved LOSs has been shown to have significant consequences on the synthetic images produced <cit.>, and as such 121 points appeared to be sufficient to resolve the features studied in that previous work. However, the idealistic dipolar set-up introduced in Sect. <ref> requires resolving spatial structures as small as 0.1 R_⊙. As a consequence, running again our synthesising script with 121 points only was no longer adequate. A significant improvement compared to our previous work <cit.> has then been to push the LOSs resolution from 121 to 241 points, resulting in several computational challenges to tackle, which are discussed below. Synthesising a white-light image of 1000-by-1000 pixels implies computing multiple 3D matrices with up to 241 million elements each, for LOSs resolved with 241 points. In typical 32GB memory systems this quickly leads to a memory overflow. One main challenge has been to optimise the code so as to minimise the memory usage, and at the same time maximise the workload on CPUs. A prior step before actual computation is to break the image down into smaller sections, where the number of sections is adjusted automatically to ensure maximised performances. Each sub-section is then computed in parallel, hence using at most the capacity of current multi-core systems. In theory the LOSs could be better resolved than 241 points, resulting in more sub-sections to compute, and hence to a longer computational time. However, we restrained ourselves to 241 points instead and worked on optimising the point distribution along each LOS. To do so one needs to have a prior idea of which portions of the LOSs need to be better resolved than others. The Thomson scattering (introduced in Sect. <ref>) served as a basis to optimise the point distribution, following a two-step procedure described below. We start by defining a uniform grid in scattering angle χ that covers both the foreground (χ=90^∘→χ=χ_max) and background (χ=χ_min→χ=90^∘) with respect to the Thomson sphere (χ=90^∘), where χ_min=0^∘ and χ_max=180^∘-α are the asymptotic limits to the acceptable range of χ angles (α is the central angle between the LOS and the observer–Sun line). Using a χ-defined uniform grid is convenient as it naturally produces a non-uniform grid in the path length z (i.e. the distance along a LOS from the observer's position), with a minimum spatial step near the Thomson sphere. We make a first computation of the total brightness on this uniform grid. Before proceeding to the grid optimisation described afterwards, the spatial extent of each LOS is reduced to a region that accounts for most of the total integrated brightness; we used 99% in this paper. The upper χ_u and lower χ_l limit to the integral of the total brightness (Eq. (<ref>)) are determined when the ratio ℛ=∑_χ=90^∘^χ_l,u n_e z^2 G dz/∑_χ=90^∘^χ_min,max n_e z^2 G dz reaches 0.99 in both the foreground and the background. This allows us to save more grid points in needed areas and to maximise the efficiency of the grid refinement step described in the next paragraph. In the second step we implemented an adaptive grid method following a similar approach to that used by <cit.>, where the spatial refinement adapts dynamically according to the physical structures to be resolved, which are here the density structures along each LOS. For this purpose we define the grid point density by a function that includes the local distribution of the total (i.e. 
not polarised) WL intensity along each LOS: R^k+1_i = (1/c)√(w_1(∑_i=1^nlosΔχ^k_i/(χ_u-χ_l))^2 + w_2((I^k_t,i/Δ z^k_i)/mean(I^k_t,i/Δ z^k_i))^2), with c = √(w_1+w_2), where the lower i and upper k subscripts refer to the spatial index (position along the LOS) and the optimisation iteration number, respectively. This formulation allows us to apply multiple optimisation criteria weighted by their respective coefficients w_*, all chosen equal to one in this paper (after performing several tests). The first criterion (left term) allows us to constrain the extent of the optimised grid near the previously determined 99% range of interest. The second criterion depends purely on the studied physical system, placing more points where the local intensity is greater. One should make sure to use normalised quantities to get a proper balance between the criteria. Using the mean value of the integrated intensity mean(I^k_t,i/Δ z^k_i) along each LOS appears to work best. We note that w_2=0 would lead to a uniform grid in χ. The actual spatial step in χ angle at the next optimisation step is then determined by Δχ^k+1_i = Δχ^k_i R^k_i/R^k+1_i . This procedure is applied to each LOS and is repeated iteratively until the brightness integrated along each LOS converges to a stable value, defined in this work as a <1% variation with respect to the previous iteration. To accelerate the optimisation process, the optimised grid is re-used from one image to another as a new initialisation instead of the χ-defined uniform grid. § RAW SYNTHETIC PRODUCTS, ABSOLUTE BRIGHTNESS In terms of absolute brightness most of the transient signatures remain relatively faint over the background solar wind (see the coloured arrows in Figs. <ref>-<ref>), even though our virtual WISPR observer is imaging them from a very close distance. The idealistic case of an inclined streamer belt (θ=40^∘, Fig. <ref>) is even dimmer compared to the LOS-aligned streamer belt case (θ=0^∘, Fig. <ref>), as a much smaller portion of these transients is integrated along the LOS. A similar comment can be made concerning the absolute brightness images synthesised from the fully fledged 3D modelling set-up (see Fig. <ref>), for which the results are discussed in Sect. <ref>. Here only one flux rope structure barely stands out from the background. For a better visualisation of these transients, we decided to work primarily with difference images such as those shown in the core text.
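For concreteness, the following Python sketch mimics, in one dimension, the grid-refinement loop described in the computational appendix above: nodes along a single LOS are redistributed according to a point-density function built from the two criteria (normalised grid spacing and normalised local intensity). It is a loose paraphrase under explicit assumptions, with a toy emissivity profile, the node spacing simply taken inversely proportional to the density function R, and the total χ range held fixed; it is not the authors' implementation.

```python
import numpy as np

def refine_los_grid(chi, emissivity, w1=1.0, w2=1.0, n_iter=5):
    """One-dimensional sketch of the adaptive node distribution along a single LOS.

    chi        : scattering-angle nodes (radians), initially uniform (241 points here).
    emissivity : callable chi -> local WL contribution (a rough proxy for I_t/dz).
    """
    c = np.sqrt(w1 + w2)
    chi_l, chi_u = chi[0], chi[-1]
    for _ in range(n_iter):
        dchi = np.diff(chi)
        mid = 0.5 * (chi[:-1] + chi[1:])
        inten = emissivity(mid)
        crit1 = (dchi / (chi_u - chi_l)) ** 2        # keeps the grid spread over the range
        crit2 = (inten / inten.mean()) ** 2          # concentrates points where emission is high
        R = np.sqrt(w1 * crit1 + w2 * crit2) / c
        new_dchi = dchi / R                          # spacing inversely proportional to R
        new_dchi *= (chi_u - chi_l) / new_dchi.sum() # preserve the total extent in chi
        chi = np.concatenate(([chi_l], chi_l + np.cumsum(new_dchi)))
    return chi

# Toy emissivity: a streamer-like enhancement near the Thomson sphere (chi = 90 deg).
emissivity = lambda chi: 1.0 + 50.0 * np.exp(-((chi - np.pi / 2.0) / 0.05) ** 2)

chi0 = np.linspace(np.deg2rad(20.0), np.deg2rad(160.0), 241)   # uniform initial grid
chi1 = refine_los_grid(chi0, emissivity)
print("min spacing (deg):", np.rad2deg(np.diff(chi1).min()))
print("max spacing (deg):", np.rad2deg(np.diff(chi1).max()))
```

The printed spacings illustrate how the refined grid concentrates nodes around the bright structure near 90 degrees while keeping a coarse sampling elsewhere.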
http://arxiv.org/abs/2307.04832v2
20230710181133
Odd Entanglement Entropy in $\text{T}\bar{\text{T}}$ deformed CFT$_2$s and Holography
[ "Debarshi Basu", "Saikat Biswas", "Ankur Dey", "Boudhayan Paul", "Gautam Sengupta" ]
hep-th
[ "hep-th" ]
Odd Entanglement Entropy in TT̅ deformed CFT_2s and Holography

Debarshi Basu, Saikat Biswas, Ankur Dey, Boudhayan Paul, Gautam Sengupta
Department of Physics, Indian Institute of Technology, Kanpur 208 016, India

We construct a replica technique to perturbatively compute the odd entanglement entropy (OEE) for bipartite mixed states in TT̅ deformed CFT_2s. This framework is then utilized to obtain the leading order correction to the OEE for two disjoint intervals, two adjacent intervals, and a single interval in TT̅ deformed thermal CFT_2s in the large central charge limit. The field theory results are subsequently reproduced in the high temperature limit from holographic computations for the entanglement wedge cross sections in the dual bulk finite cut-off BTZ geometries. We further show that for finite size TT̅ deformed CFT_2s at zero temperature the corrections to the OEE are vanishing to the leading order from both field theory and bulk holographic computations. § INTRODUCTION Quantum entanglement has emerged as a prominent area of research to explore a wide range of physical phenomena spanning several disciplines, from quantum many body systems in condensed matter physics to issues of quantum gravity and black holes. The entanglement entropy (EE) has played a crucial role in this endeavor as a measure for characterizing the entanglement of bipartite pure quantum states, although it fails to effectively capture mixed state entanglement due to spurious correlations. In this context several mixed state entanglement and correlation measures such as the reflected entropy, entanglement of purification, balanced partial entanglement etc. have been proposed in quantum information theory. Interestingly it was possible to compute several of these measures through certain replica techniques for bipartite states in two dimensional conformal field theories (CFT_2s). In this connection the Ryu Takayanagi (RT) proposal <cit.> quantitatively characterized the holographic entanglement entropy (HEE) of a subsystem in CFTs dual to bulk AdS geometries through the AdS/CFT correspondence. This was extended by the Hubeny Rangamani Takayanagi (HRT) proposal <cit.> which provided a covariant generalization of the RT proposal for time dependent states in CFTs dual to non static bulk geometries. The RT and HRT proposals were later proved in <cit.>. Recently another computable measure for mixed state entanglement known as the odd entanglement entropy (OEE) was proposed by Tamaoka in <cit.>. The OEE may be broadly understood as the von Neumann entropy of the partially transposed reduced density matrix of a given subsystem <cit.>.[This is a loose interpretation as the partially transposed reduced density matrix does not represent a physical state and may contain negative eigenvalues <cit.>.] The author in <cit.> utilized a suitable replica technique to compute the OEE for a bipartite mixed state configuration of two disjoint intervals in a CFT_2. Interestingly in <cit.> the author proposed a holographic duality relating the OEE and the EE to the bulk entanglement wedge cross section (EWCS) for a given bipartite state in the AdS_3/CFT_2 scenario. For recent developments see <cit.>. On a different note it was demonstrated by Zamolodchikov <cit.> that CFT_2s which have undergone an irrelevant deformation by the determinant of the stress tensor (known as TT̅ deformations) exhibit an exactly solvable energy spectrum and partition function. 
These theories display non local UV structure and have an infinite number of possible RG flows leading to the same fixed point. A holographic dual for such theories was proposed in <cit.> to be a bulk 3 geometry with a finite radial cut-off. This proposal could be substantiated through the matching of the two point function, energy spectrum and the partition function between the bulk and the boundary (see <cit.> for further developments). The authors in <cit.> computed the HEE for bipartite pure state configurations in various deformed dual s. Subsequently the authors in <cit.> obtained the reflected entropy and its holographic dual, the EWCS, for bipartite mixed states in deformed dual 2s. Recently the entanglement negativity for various bipartite mixed states in deformed thermal 2s, and the corresponding holographic dual for bulk finite cut-off BTZ black hole geometries were computed in <cit.>. Motivated by the developments described above, in this article we compute the OEE for various bipartite mixed states in deformed dual 2s. For this purpose we construct an appropriate replica technique and a conformal perturbation theory along the lines of <cit.> to develop a path integral formulation for the OEE in deformed 2s with a small deformation parameter. This perturbative construction is then utilized to compute the first order corrections to the OEE for two disjoint intervals, two adjacent intervals, and a single interval in a deformed thermal 2 with a small deformation parameter in the large central charge limit. Subsequently we explicitly compute the bulk EWCS for the above mixed state configurations in the deformed thermal dual 2s by employing a construction involving embedding coordinates as described in <cit.>. Utilizing the EWCS obtained we demonstrate that the first order correction to field theory replica technique results for the OEE in the large central charge and the high temperature limit match exactly with the first order correction to the sum of the EWCS and the HEE verifying the holographic duality between the above quantities in the context of deformed thermal 2s. Following this we extend our perturbative construction to deformed finite size 2s at zero temperature and demonstrate that the leading order corrections to the OEE are vanishing, which is substantiated through bulk holographic computations involving the EWCS. This article is organized as follows. In <ref> we briefly review the basic features of deformed 2s and the OEE. In <ref> we develop a perturbative expansion for the OEE in a deformed 2. In <ref> this perturbative construction is then employed to obtain the leading order corrections to the OEE for various bipartite states in a deformed thermal 2. Following this we explicitly demonstrate the holographic duality for first order corrections between the OEE and the sum of the bulk EWCS and the HEE for these mixed states. Subsequently in <ref> we extend our perturbative analysis to a deformed finite size 2 at zero temperature and show that the leading order corrections to the OEE are zero. This is later verified through bulk holographic computations. Finally, we summarize our results in <ref> and present our conclusions. Some of the lengthy technical details of our computations have been described in <ref>. § REVIEW OF EARLIER LITERATURE §.§ deformation in a 2 We begin with a brief review of a two dimensional conformal field theory deformed by the operator defined as follows <cit.> <TT̅>=1/8(<T_ab><T^ab>-<T^a_a>^2). 
It is a double trace composite operator which satisfies the factorization property <cit.>. The corresponding deformation generates a one parameter family of theories described by a deformation parameter μ (≥ 0) as given by the following flow equation <cit.> dℐ_QFT^(μ)/dμ=∫ d^2x (TT̅)_μ  ,  ℐ_QFT^(μ)|_μ=0=ℐ_CFT , where ℐ_QFT^(μ) and ℐ_CFT represent the actions of the deformed and undeformed theories respectively. The deformation parameter μ has dimensions of length squared. Note that the energy spectrum may be determined exactly for a deformed 2 <cit.>. When μ is small, the action of the deformed 2 may be perturbatively expanded as <cit.> ℐ_QFT^(μ)=ℐ_CFT+μ∫ d^2x (TT̅)_μ=0 =ℐ_CFT+μ∫ d^2x (TT̅-Θ^2) , where T≡ T_ww, T̅≡ T_w̅w̅ and Θ≡ T_ww̅ describe the components of the stress tensor of the undeformed theory expressed in the complex coordinates (w,w̅). Our investigation focuses on deformed 2s at a finite temperature, and finite size deformed 2s at zero temperature, which are defined on appropriate cylinders. The expectation value of Θ vanishes on a cylinder and the Θ^2 term in <ref> may be dropped from further consideration <cit.>. §.§ Odd entanglement entropy We now focus our attention on a bipartite mixed state correlation measure termed the odd entanglement entropy (OEE), which approximately characterizes the von Neumann entropy for the partially transposed reduced density matrix of a given bipartite system <cit.>. In this context we begin with a bipartite system comprising the subsystems A and B, described by the reduced density matrix ρ_AB defined on the Hilbert space ℋ_AB=ℋ_A⊗ℋ_B, where ℋ_A and ℋ_B denote the Hilbert spaces for the subsystems A and B respectively. The partial transpose ρ_AB^T_B for the reduced density matrix ρ_AB with respect to the subsystem B is then given by e^(A)_ie^(B)_jρ_AB^T_Be^(A)_ke^(B)_l=e^(A)_ie^(B)_lρ_ABe^(A)_ke^(B)_j, where |e^(A)_i⟩ and |e^(B)_j⟩ describe orthonormal bases for the Hilbert spaces ℋ_A and ℋ_B respectively. The Rényi odd entropy of order n_o between the subsystems A and B may be defined as <cit.> S_o^(n_o)(A:B)=1/1-n_olog[Tr(ρ_AB^T_B)^n_o], where n_o is an odd integer. The OEE between the subsystems A and B may now be defined through the analytic continuation of the odd integer n_o→ 1 in <ref> as follows <cit.> S_o(A:B)=lim_n_o→ 1[S_o^(n_o)(A:B)]=lim_n_o→ 11/1-n_olog[Tr(ρ_AB^T_B)^n_o]. §.§ Odd entanglement entropy in a 2 The subsystems A and B in a 2 may be characterized by the disjoint spatial intervals [z_1,z_2] and [z_3,z_4] in the complex plane [with x_1<x_2<x_3<x_4 , x= Re(z)]. In <cit.> the author advanced a replica technique to compute the OEE for bipartite systems in a 2. The replica construction involves an n_o sheeted Riemann surface ℳ_n_o (where n_o∈ 2ℤ^+-1) prepared through the cyclic and anti cyclic sewing of the branch cuts of n_o copies of the original manifold ℳ along the subsystems A and B respectively. Utilizing the replica technique, the trace of the partial transpose in <ref> may be expressed in terms of the partition function on the n_o sheeted replica manifold as follows <cit.> Tr(ρ_AB^T_B)^n_o =ℤ[ℳ_n_o]/(ℤ[ℳ])^n_o . The relation in <ref> may be utilized along with <ref> to express the OEE in terms of the partition functions as follows S_o(A:B)=lim_n_o→ 11/1-n_olog[ℤ[ℳ_n_o]/(ℤ[ℳ])^n_o]. 
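Before expressing the OEE through twist-field correlators, it may be useful to see the above definitions at work in a finite-dimensional setting. In the sketch below (our own illustration) the partial transpose of <ref> is constructed explicitly for a two-qubit mixed state, and the replica limit n_o→ 1 is evaluated through its standard spectral form S_o = -∑_i λ_i log|λ_i|, where λ_i are the eigenvalues of ρ_AB^T_B; the particular state is an arbitrary choice made for illustration.

```python
import numpy as np
from numpy.linalg import eigvalsh

def partial_transpose_B(rho, dA, dB):
    """Partial transpose over the second factor of a (dA*dB)-dimensional density matrix."""
    r = rho.reshape(dA, dB, dA, dB)                 # r[i, j, k, l] = <i j| rho |k l>
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)   # swap the B indices j <-> l

def odd_entropy(rho, dA, dB):
    """S_o = lim_{n_o -> 1} (1/(1-n_o)) log Tr (rho^{T_B})^{n_o} = -sum_i lam_i log|lam_i|."""
    lam = eigvalsh(partial_transpose_B(rho, dA, dB))
    lam = lam[np.abs(lam) > 1e-12]                  # drop numerical zeros
    return float(-(lam * np.log(np.abs(lam))).sum())

def von_neumann(rho):
    lam = eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())

# Example: a Werner-like mixture of a Bell state with the maximally mixed state.
bell = np.zeros(4); bell[0] = bell[3] = 1.0 / np.sqrt(2.0)
p = 0.7
rho = p * np.outer(bell, bell) + (1.0 - p) * np.eye(4) / 4.0

S_o  = odd_entropy(rho, 2, 2)
S_AB = von_neumann(rho)
print(f"S_o(A:B) = {S_o:.4f}")
print(f"S(A u B) = {S_AB:.4f}")
print(f"S_o - S  = {S_o - S_AB:.4f}   (the combination the holographic proposal relates to E_W)")
```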
The partition function in <ref> may be expressed in terms of an appropriate four point correlation function of the twist and anti twist operators σ_n_o and σ̅_n_o located at the end points of the subsystems A and B as follows <cit.> ℤ[ℳ_n_o]/(ℤ[ℳ])^n_o =⟨σ_n_o(z_1,z̅_1)σ̅_n_o(z_2,z̅_2) σ̅_n_o(z_3,z̅_3)σ_n_o(z_4,z̅_4) ⟩. We are now in a position to express the OEE between the subsystems A and B in terms of the four point twist correlator by combining <ref> as follows <cit.> S_o(A:B)=lim_n_o→ 11/1-n_olog[ ⟨σ_n_o(z_1,z̅_1)σ̅_n_o(z_2,z̅_2) σ̅_n_o(z_3,z̅_3)σ_n_o(z_4,z̅_4) ⟩]. Note that σ_n_o and σ̅_n_o represent primary operators in 2 with the following conformal dimensions <cit.> h_n_o=h̅_n_o=c/24(n_o-1/n_o). We also note in passing the conformal dimensions of the twist operators σ_n_o^2 and σ̅_n_o^2, which are given as follows <cit.> h_n_o^(2)=h̅_n_o^(2)=h_n_o=c/24(n_o-1/n_o). §.§ Holographic odd entanglement entropy We now follow <cit.> to present a brief review of the EWCS. Let M be any specific time slice of a bulk static geometry in the context of d+1/d framework. Consider a region A in ∂ M. The entanglement wedge of A is given by the bulk region bounded by A∪Γ_A^ min, where Γ_A^ min is the RT surface for A. It has been proposed to be dual to the reduced density matrix ρ_A <cit.>. To define the EWCS, we subdivide A=A_1∪ A_2. A cross section of the entanglement wedge for A_1∪ A_2, denoted by Σ_A_1A_2, is defined such that it divides the wedge into two parts containing A and B separately. The EWCS between the subsystems A_1 and A_2 may then be defined as <cit.> E_W (A_1:A_2)=Area(Σ_A_1A_2^ min)/4G_N , where Σ_A_1A_2^ min represents the minimal cross section of the entanglement wedge. In <cit.> the author proposed a holographic duality describing the difference of the OEE and the EE in terms of the bulk EWCS of the bipartite state in question as follows S_o (A_1:A_2) - S (A_1 ∪ A_2) = E_W (A_1:A_2) , where S(A_1 ∪ A_2) is the EE for the subsystem A_1 ∪ A_2, and E_W (A_1:A_2) represents the EWCS between the subsystems A_1 and A_2 respectively. § OEE IN A DEFORMED 2 In this section we develop an appropriate replica technique similar to those described in <cit.> for the computation of the OEE for various bipartite mixed state configurations in a deformed 2. To this end we consider two spatial intervals A and B in a deformed 2 defined on a manifold ℳ. The partition functions on ℳ and ℳ_n_o for this deformed theory may be expressed in the path integral representation as follows [refer to <ref>] ℤ[ℳ] = ∫_ℳ𝒟ϕ e^-ℐ_QFT^(μ)[ϕ] , ℤ[ℳ_n_o] = ∫_ℳ_n_o𝒟ϕ e^-ℐ_QFT^(μ)[ϕ] . When the deformation parameter μ is small, <ref> may be utilized to express the OEE as S_o^(μ)(A:B)=lim_n_o→ 11/1-n_olog[∫_ℳ_n_o𝒟ϕ e^-ℐ_CFT-μ∫_ℳ_n_o(TT̅)/(∫_ℳ𝒟ϕ e^-ℐ_CFT-μ∫_ℳ(TT̅))^n_o] , where the superscript μ has been used to specify the OEE in the deformed 2. The exponential factors in <ref> may be further expanded for small μ to arrive at S_o^(μ)(A:B) =lim_n_o→ 11/1-n_olog[∫_ℳ_n_o𝒟ϕ e^-ℐ_CFT(1-μ∫_ℳ_n_o(TT̅)+𝒪(μ^2))/[∫_ℳ𝒟ϕ e^-ℐ_CFT(1-μ∫_ℳ(TT̅)+𝒪(μ^2))]^n_o] =S_o^(CFT)(A:B)+lim_n_o→ 11/1-n_olog[(1-μ∫_ℳ_n_oTT̅_ℳ_n_o)/(1-μ∫_ℳTT̅_ℳ)^n_o] . The term S_o^(CFT)(A:B)≡ S_o^(μ=0)(A:B) in <ref> represents the corresponding OEE for the undeformed 2. The expectation values of the operator on the manifolds ℳ and ℳ_n_o appearing in <ref> are defined as follows TT̅_ℳ=∫_ℳ𝒟ϕ e^-ℐ_CFT(TT̅)/∫_ℳ𝒟ϕ e^-ℐ_CFT , TT̅_ℳ_n_o=∫_ℳ_n_o𝒟ϕ e^-ℐ_CFT(TT̅)/∫_ℳ_n_o𝒟ϕ e^-ℐ_CFT . 
The second term on the right hand side of <ref> may be simplified to obtain the first order correction in μ to the OEE due to the deformation as follows δ S_o(A:B) = -μlim_n_o→ 11/1-n_o[∫_ℳ_n_oTT̅_ℳ_n_o-n_o ∫_ℳTT̅_ℳ] . § DEFORMED THERMAL 2 AND HOLOGRAPHY §.§ OEE in a deformed thermal 2 We now investigate the behavior of the deformed 2 at a finite temperature 1/β. The corresponding manifold ℳ for this configuration is given by an infinitely long cylinder of circumference β with the Euclidean time direction compactified by the periodic identification τ∼τ+β. This cylindrical manifold ℳ may be described by the complex coordinates <cit.> w=x+iτ , w̅=x-iτ , with the spatial coordinate x∈ (-∞,∞) and the time coordinate τ∈ (0,β). The cylinder ℳ may be further expressed in terms of the complex plane ℂ through the following conformal map <cit.> z=e^2π w/β , z̅=e^2πw̅/β , where (z, z̅) represent the coordinates on the complex plane. The transformation of the stress tensors under the conformal map described in <ref> is given as T(w)=T(z)-π^2c/6β^2 , T̅(w̅)=T̅(z̅)-π^2c/6β^2 . The relations in <ref> may be utilized to arrive at T(w)T̅(w̅)_ℳ=(π^2c/6β^2)^2, where we have used the fact that T(z)_ℂ=T̅(z̅)_ℂ=0 for the vacuum state of an undeformed 2 described by the complex plane. In the following subsections, we utilize <ref> to compute the first order correction in μ to the OEE in a finite temperature deformed 2 for two disjoint intervals, two adjacent intervals and a single interval. §.§.§ Two disjoint intervals We begin with the bipartite mixed state configuration of two disjoint spatial intervals A=[x_1,x_2] and B=[x_3,x_4] in a deformed 2 at a finite temperature 1/β, defined on the cylindrical manifold ℳ (x_1<x_2<x_3<x_4). Note that the intervals may also be represented as A=[w_1,w_2] and B=[w_3,w_4] with τ=0 [cf. <ref>]. The value of TT̅_ℳ_n_o on the replica manifold ℳ_n_o may be computed by insertion of the operator into the appropriate four point twist correlator as follows <cit.> ∫_ℳ_n_oTT̅_ℳ_n_o = ∑_k=1^n_o∫_ℳT_k(w)T̅_k(w̅)σ_n_o(w_1, w̅_1)σ̅_n_o(w_2, w̅_2)σ̅_n_o(w_3, w̅_3)σ_n_o(w_4, w̅_4)_ℳ/σ_n_o(w_1, w̅_1)σ̅_n_o(w_2, w̅_2)σ̅_n_o(w_3, w̅_3)σ_n_o(w_4, w̅_4)_ℳ = ∫_ℳ1/n_oT^(n_o)(w)T̅^(n_o)(w̅)σ_n_o(w_1, w̅_1)σ̅_n_o(w_2, w̅_2)σ̅_n_o(w_3, w̅_3)σ_n_0(w_4, w̅_4)_ℳ/σ_n_o(w_1, w̅_1)σ̅_n_o(w_2, w̅_2)σ̅_n_o(w_3, w̅_3)σ_n_o(w_4, w̅_4)_ℳ . Here T_k(w),T̅_k(w̅) are the stress tensors of the undeformed 2 on the k^th sheet of the Riemann surface ℳ_n_o, while T^(n_o)(w),T̅^(n_o)(w̅) represent the stress tensors on ℳ_n_o <cit.>. σ_n_o(w_i,w̅_i),σ̅_n_o(w_i,w̅_i) represent the twist operators located at the end points w_i of the intervals. An identity described in <cit.> has been used to derive the last line of <ref>. The relation in <ref> may now be utilized to transform the stress tensors from the cylindrical manifold to the complex plane. The following Ward identities are then employed to express the correlation functions involving the stress tensors in terms of the twist correlators on the complex plane T^(n_o)(z)𝒪_1(z_1,z̅_1)…𝒪_m(z_m,z̅_m)_ℂ = ∑_j=1^m(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) 𝒪_1(z_1,z̅_1)…𝒪_m(z_m,z̅_m)_ℂ , T̅^(n_o)(z̅)𝒪_1(z_1,z̅_1)…𝒪_m(z_m,z̅_m)_ℂ = ∑_j=1^m(h̅_j/(z̅-z̅_j)^2+1/(z̅-z̅_j)∂_z̅_j) 𝒪_1(z_1,z̅_1)…𝒪_m(z_m,z̅_m)_ℂ , where 𝒪_is represent arbitrary primary operators with conformal dimensions (h_i ,h̅_i). 
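The constant shift -π^2c/6β^2 appearing in the transformation of the stress tensor under the map <ref> can be checked symbolically. The following Python (sympy) sketch assumes the standard anomalous transformation T(w) = (dz/dw)^2 T(z) + (c/12){z;w} and simply evaluates the Schwarzian derivative of z=e^{2π w/β}; it is a consistency check, not part of the derivation.

```python
import sympy as sp

w, beta, c = sp.symbols('w beta c', positive=True)

# Conformal map from the thermal cylinder to the plane: z = exp(2*pi*w/beta).
z = sp.exp(2 * sp.pi * w / beta)

# Schwarzian derivative {z; w} = z'''/z' - (3/2) (z''/z')**2.
z1, z2, z3 = [sp.diff(z, w, k) for k in (1, 2, 3)]
schwarzian = sp.simplify(z3 / z1 - sp.Rational(3, 2) * (z2 / z1) ** 2)

# Anomalous piece (c/12) {z; w} of the standard CFT transformation law.
shift = sp.simplify(c / 12 * schwarzian)

print("{z; w}     =", schwarzian)   # expected: -2*pi**2/beta**2
print("c/12 {z;w} =", shift)        # expected: -pi**2*c/(6*beta**2), as quoted above
```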
Utilizing <ref>, we may now express the expectation value in <ref> as ∫_ℳ_n_oTT̅_ℳ_n_o = 1/n_o∫_ℳ1/σ_n_o(z_1, z̅_1)σ̅_n_o(z_2, z̅_2)σ̅_n_o(z_3, z̅_3)σ_n_o(z_4, z̅_4)_ℂ ×[ -π^2 c n_o/6 β^2+(2 π z/β)^2 ∑_j=1^4(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) ] ×[ -π^2 c n_o/6 β^2+(2 πz̅/β)^2 ∑_k=1^4(h̅_k/(z̅-z̅_k)^2+1/(z̅-z̅_k)∂_z̅_k) ] ×σ_n_o(z_1, z̅_1)σ̅_n_o(z_2, z̅_2)σ̅_n_o(z_3, z̅_3)σ_n_o(z_4, z̅_4)_ℂ , where h_i=h̅_i=h_n_o (i=1,2,3,4) [see <ref>]. The four point twist correlator in <ref> for two disjoint intervals in proximity described by the t channel is given by <cit.> σ_n_o(z_1,z̅_1)σ̅_n_o(z_2,z̅_2) σ̅_n_o(z_3,z̅_3)σ_n_o(z_4,z̅_4)_ℂ≈z_14z_23^-4 h_n_o(1+√(η)/1-√(η))^-h_n_o^(2)(1+√(η̅)/1-√(η̅))^-h̅_n_o^(2). The conformal dimensions h_n_o, h_n_o^(2) and h̅_n_o^(2) in <ref> are given in <ref>. We have defined the cross ratio η:=z_12 z_34/z_13 z_24 where z_ij≡ z_i-z_j. We are now in a position to obtain the first order correction due to μ in the OEE of two disjoint intervals in a deformed finite temperature 2 by substituting <ref> into <ref> as follows δ S_o(A:B) = -μ c^2 π ^4 √(η)/18β^4 z_21 z_32 z_41 z_43∫_ℳ z^2 [z_32 z_42 [z_31 (2z-3z_1+z_4)√(η)+z_43 (z-z_1)]/(z-z_1)^2. +z_31 z_41 [z_42 (2z-3z_2+z_3)√(η)-z_43 (z-z_2)]/(z-z_2)^2 -z_42 z_41 [z_31(2z+z_2-3z_3) √(η)-z_21(z-z_3)]/(z-z_3)^2 . -z_31 z_32 [z_42 (2z+z_1-3z_4)√(η)+z_21 (z-z_4)]/(z-z_4)^2]+h.c. The detailed derivation of the definite integrals in <ref> has been provided in <ref>. These results may be used to arrive at δ S_o (A:B) = μ c^2 π^3 /36 β^2[ {( √(z_42 z_43/z_21 z_31)+1 ) z_1+z_4 }/ z_41log[ z_1/z_2] . . + (√(z_21 z_43/z_31 z_42)-2) (z_1 z_2-z_3 z_4) /z_32 z_41log[ z_2/z_3] + { z_1 - (√(z_21 z_31/z_42 z_43)-1 ) z_4 }/ z_41log[ z_3/z_4] + h.c. ]. We may now substitute z_i = z̅_i = e^2π x_i/β (at τ_i=0) into <ref> to finally obtain δ S_o (A:B) = -μ c^2 π^4/9 β ^3√(sinh(π x_21/β) sinh(π x_43/β)/sinh(π x_31/β) sinh(π x_42/β))[x_21(π x_21/β). . -x_32(π x_32/β) - x_41(π x_41/β)+x_43(π x_43/β)] -μ c^2 π^4/9 β ^3[x_32(π x_32/β)+x_41(π x_41/β)], where x_ij≡ x_i-x_j. §.§.§ Two adjacent intervals We now turn our attention to the bipartite mixed state configuration of two adjacent intervals A=[x_1,x_2] and B=[x_2,x_3] in a deformed 2 at a finite temperature 1/β (x_1<x_2<x_3). As earlier the intervals may be expressed as A=[w_1,w_2] and B=[w_2,w_3] with τ=0. The value of TT̅_ℳ_n_o for two adjacent intervals may be evaluated in a manner similar to that of two disjoint intervals as follows ∫_ℳ_n_oTT̅_ℳ_n_o = ∫_ℳ1/n_oT^(n_o)(w)T̅^(n_o)(w̅)σ_n_o(w_1, w̅_1)σ̅^2_n_o(w_2, w̅_2)σ_n_o(w_3, w̅_3)_ℳ/σ_n_o(w_1, w̅_1)σ̅^2_n_o(w_2, w̅_2)σ_n_o(w_3, w̅_3)_ℳ . As before the relations in <ref> may be utilized to express the expectation value in <ref> as follows ∫_ℳ_n_oTT̅_ℳ_n_o = 1/n_o∫_ℳ1/σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ_n_o(z_3, z̅_3)_ℂ ×[ -π^2 c n_o/6 β^2+(2 π z/β)^2 ∑_j=1^3(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) ] ×[ -π^2 c n_o/6 β^2+(2 πz̅/β)^2 ∑_k=1^3(h̅_k/(z̅-z̅_k)^2+1/(z̅-z̅_k)∂_z̅_k) ] ×σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ_n_o(z_3, z̅_3)_ℂ . In <ref> we have h_1=h_3=h_n_o,h_2=h^(2)_n_o with h̅_i=h_i (i=1,2,3) [see <ref>]. The three point twist correlator in <ref> is given by <cit.> σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ_n_o(z_3, z̅_3)_ℂ =𝒞_σ_n_oσ̅_n_o^2σ_n_o/( z^h^(2)_n_o_12z^h^(2)_n_o_23z^2h_n_o-h^(2)_n_o_13) ( z̅^h̅^(2)_n_o_12z̅^h̅^(2)_n_o_23z̅^2h̅_n_o-h̅^(2)_n_o_13) , where 𝒞_σ_n_eσ̅_n_e^2σ_n_e is the relevant OPE coefficient. 
The first order correction due to μ in the OEE of two adjacent intervals in a deformed thermal 2 may now be obtained by substituting <ref> into <ref> as follows δ S_o (A:B) = -μ c^2π^4/18 β^4∫_ℳz^2[ 1/(z-z_1)^2+1/(z-z_2)^2+1/(z-z_3)^2. . +(-3 z+z_1+z_2+z_3) /(z-z_1) (z-z_2) (z-z_3) + h.c. ]. The technical details of the definite integrals in <ref> have been included in <ref>. The correction to the OEE may then be expressed as δ S_o (A:B) = -μ c^2π^3/36β^2[(z_1^2-z_2 z_3) log(z_1/z_2)/z_12 z_13+(z_1 z_2-z_3^2) log(z_2/z_3)/z_23z_13 + h.c. ] . As earlier we may now restore the x coordinates by inserting z_i = z̅_i = e^2π x_i/β (at τ_i=0) into <ref> to arrive at δ S_o (A:B) = - (μ c^2π^4/36β^3) x_21cosh(2 π x_21/β)+x_32cosh(2 π x_32/β) - x_31cosh(2 π x_31/β) /sinh(π x_21/β) sinh(π x_32/β) sinh(π x_31/β) . §.§.§ A single interval We finally focus on the case of a single interval A=[-ℓ,0] in a thermal deformed 2 (ℓ>0). To this end it is required to consider two auxiliary intervals B_1=[-L, -ℓ] and B_2=[0,L] on either side of the interval A with B≡ B_1∪ B_2 (L≫ℓ) <cit.>. The intervals may be equivalently represented by the coordinates B_1=[x_1,x_2], A=[x_2,x_3] and B_2=[x_3,x_4], with x_1=-L,x_2=-ℓ,x_3=0,x_4=L and x_1<x_2<x_3<x_4. As before the intervals may also be characterized as B_1=[w_1,w_2], A=[w_2,w_3] and B_2=[w_3,w_4] with τ=0. The OEE for the mixed state configuration of the single interval A is then evaluated by implementing the bipartite limit L→∞ (B→ A^c) subsequent to the replica limit n_o→ 1 <cit.>. For the configuration described above, the integral of TT̅_ℳ_n_o on the replica manifold is given by ∫_ℳ_n_oTT̅_ℳ_n_o = ∫_ℳ1/n_oT^(n_o)(w)T̅^(n_o)(w̅)σ_n_o(w_1, w̅_1)σ̅^2_n_o(w_2, w̅_2)σ^2_n_o(w_3, w̅_3)σ̅_n_o(w_4, w̅_4)/σ_n_o(w_1, w̅_1)σ̅^2_n_o(w_2, w̅_2)σ^2_n_o(w_3, w̅_3)σ̅_n_o(w_4, w̅_4) . As earlier <ref> may be simplified by utilizing <ref> as follows ∫_ℳ_n_oTT̅_ℳ_n_o = 1/n_o∫_ℳ1/σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ^2_n_o(z_3, z̅_3)σ̅_n_o(z_4, z̅_4) ×[ -π^2 c n_o/6 β^2+(2 π z/β)^2 ∑_j=1^4(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) ] ×[ -π^2 c n_o/6 β^2+(2 πz̅/β)^2 ∑_k=1^4(h̅_k/(z̅-z̅_k)^2+1/(z̅-z̅_k)∂_z̅_k) ] ×σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ^2_n_o(z_3, z̅_3)σ̅_n_o(z_4, z̅_4)_𝒞 , where h_1=h_4=h_n_o,h_2=h_3=h^(2)_n_o with h̅_i=h_i (i=1,2,3,4) [see <ref>]. The four point twist correlator in <ref> is given by <cit.> σ_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2)σ^2_n_o(z_3, z̅_3)σ̅_n_o(z_4, z̅_4) = c_n_oc^(2)_n_o(ℱ_n_o(η)/z^2h_n_o_14 z^2h^(2)_n_o_23η^h^(2)_n_o) (ℱ̅_n_o(η̅)/z̅^2h̅_n_o_14z̅^2h̅^(2)_n_o_23η̅^h̅^(2)_n_o) , where c_n_o and c_n_o^(2) are the normalization constants. The functions ℱ_n_o(η) and ℱ̅_n_o(η̅) in <ref> satisfy the following OPE limits ℱ_n_o(1)ℱ̅_n_o(1)=1 , ℱ_n_o(0)ℱ̅_n_o(0)=𝒞_σ_n_oσ̅_n_o^2σ̅_n_o/c_n_o^(2) , where 𝒞_σ_n_oσ̅_n_o^2σ̅_n_o represents the relevant OPE coefficient. As earlier <ref> may be substituted into <ref> to arrive at δ S_o (A:B)= -μ c^2 π^4/18 β^4∫_ℳ[ ∑_j=1^4z^2/(z-z_j)^2 -∑_j=1^4z^2/(z-z_j)∂_z_j(log[z^2_23 z^2_14 η f(η)]) + h.c. ]. The functions f(η) and f̅(η̅) introduced in <ref> are defined as follows lim_n_o→ 1 [ℱ_n_o (η)]^1/1-n_o = [f(η)]^c/12 , lim_n_o→ 1 [ℱ̅_n_o (η̅)]^1/1-n_o = [f̅(η̅)]^c/12 . 
The first order correction due to μ in the OEE of a single interval in a deformed 2 at a finite temperature 1/β may now be computed from <ref> by reverting back to the coordinates involving ℓ, L and implementing the bipartite limit L→∞ as follows δ S_o (A:A^c) = -2 μ c^2 π^4 ℓ/9β^3(1/ e^2 πℓ/β -1 - e^-2 πℓ/β f' [ e^-2 πℓ/β]/2 f [ e^-2 πℓ/β]) - lim_L→∞[ μ c^2 π ^4 L/9β^3( 2 π L/β) ] . The technical details of the integrals necessary to arrive at <ref> from <ref> have been provided in <ref>. Note that the second term on the right hand side of <ref> represents the divergent part of the OEE for a single interval. §.§ Holographic OEE in a deformed thermal 2 We now turn our attention to the holographic description of the OEE as advanced in <cit.> for various bipartite mixed states in a deformed 2 at a finite temperature 1/β. The holographic dual of a deformed 2 is described by the bulk 3 geometry corresponding to the undeformed 2 with a finite cut-off radius r_c given as follows <cit.> r_c=√(6 R^4/π cμ)=R^2/ϵ . In <ref> μ is the deformation parameter, c is the central charge, ϵ is the UV cut-off of the field theory, and R is the 3 radius. For a deformed 2 at a finite temperature 1/β, the corresponding bulk dual is characterized by a BTZ black hole <cit.> with a finite cut-off, represented by <cit.> ds^2=-r^2-r_h^2/R^2dt^2+R^2/r^2-r_h^2dr^2+r^2dx̃^2 . In the above metric, the horizon of the black hole is located at r=r_h, with β=2π R^2/r_h as the inverse temperature of the black hole and the dual 2. For simplicity from now onwards we set the radius R=1. The metric on the deformed 2, located at the cut-off radius r=r_c, is conformal to the bulk metric at r=r_c as follows <cit.> ds^2=-dt^2+dx̃^2/1-r_h^2/r_c^2≡ -dt^2+dx^2 , x=x̃(1-r_h^2/r_c^2)^-1/2, where x represents the spatial coordinate on the deformed 2. To compute the EWCS, we embed the BTZ black hole described by <ref> in ℝ^2,2 as follows <cit.> ds^2 =η_ABdX^AdX^B =-dX^2_0-dX^2_1+dX^2_2+dX^2_3 , X^2=-1 . The metric in <ref> may then be described by these embedding coordinates as follows <cit.> X_0(t,r,x) =√(r^2/r_h^2-1)  sinh(2 π t/β), X_1(t,r,x) =r/r_hcosh(2 πx̃/β), X_2(t,r,x) =√(r^2/r_h^2-1)  cosh( 2 π t/β), X_3(t,r,x) =r/r_hsinh(2 πx̃/β). Note that for convenience the embedding coordinates in <ref> are parameterized in terms of the coordinate x described in <ref>. We also introduce a new coordinate u = 1/r to simplify later calculations, with u_c ≡ 1/r_c and u_h ≡ 1/r_h. We also note the Brown Henneaux formula G_N=3/(2c) described in <cit.>, which will be extensively used in later sections. In the following subsections we apply the methods described above to compute the holographic OEE from <ref> for two disjoint intervals, two adjacent intervals, and a single interval in a deformed thermal holographic 2. §.§.§ Two disjoint intervals We begin with the two disjoint spatial intervals A=[x_1,x_2] and B=[x_3,x_4] with x_1<x_2<x_3<x_4 as described in <ref>. The setup has been shown in <ref>. The EWCS involving the bulk points X(s_1),X(s_2),X(s_3),X(s_4) is given by <cit.> E_W = 1/4G_Ncosh ^-1( 1+√(u)/√(v)), where u=ξ^-1_12ξ^-1_34ξ^-1_13ξ^-1_24 , v=ξ^-1_14ξ^-1_23ξ^-1_13ξ^-1_24 , ξ^-1_ij=-X(s_i)· X(s_j) . The four points on the boundary may be expressed in the global coordinates as X(0,r_c,x_i) for i=1,2,3,4. 
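The embedding prescription above translates directly into a short numerical routine. In the Python sketch below (our own illustration) the boundary endpoints are placed on the cut-off surface r=r_c at t=0, the inner products ξ_ij^{-1}=-X(s_i)· X(s_j) are evaluated with the (-,-,+,+) signature, and E_W is computed from <ref>; we read the flattened expressions for u and v as the cross ratios u=ξ_12^{-1}ξ_34^{-1}/(ξ_13^{-1}ξ_24^{-1}) and v=ξ_14^{-1}ξ_23^{-1}/(ξ_13^{-1}ξ_24^{-1}), the boundary coordinate is rescaled as x̃=x√(1-r_h^2/r_c^2), and all parameter values are purely illustrative assumptions.

```python
import numpy as np

def embed(t, r, x, r_h, r_c, beta):
    """Embedding coordinates X^A of the finite cut-off BTZ geometry (R = 1), following the text.

    The boundary coordinate x is rescaled to x_tilde = x*sqrt(1 - r_h**2/r_c**2); this
    identification is our reading of the set-up and should be treated as an assumption.
    """
    xt = x * np.sqrt(1.0 - (r_h / r_c) ** 2)
    s = np.sqrt((r / r_h) ** 2 - 1.0)
    return np.array([
        s * np.sinh(2.0 * np.pi * t / beta),           # X_0
        (r / r_h) * np.cosh(2.0 * np.pi * xt / beta),  # X_1
        s * np.cosh(2.0 * np.pi * t / beta),           # X_2
        (r / r_h) * np.sinh(2.0 * np.pi * xt / beta),  # X_3
    ])

def inner(X, Y):
    """R^{2,2} inner product with signature (-,-,+,+)."""
    return -X[0] * Y[0] - X[1] * Y[1] + X[2] * Y[2] + X[3] * Y[3]

def ewcs_disjoint(xs, r_h, r_c, beta, G_N):
    """E_W = (1/4 G_N) arccosh((1+sqrt(u))/sqrt(v)) for endpoints x_1<x_2<x_3<x_4 at t=0, r=r_c."""
    X = [embed(0.0, r_c, x, r_h, r_c, beta) for x in xs]
    xi = lambda i, j: -inner(X[i], X[j])               # xi_ij^{-1}
    u = xi(0, 1) * xi(2, 3) / (xi(0, 2) * xi(1, 3))
    v = xi(0, 3) * xi(1, 2) / (xi(0, 2) * xi(1, 3))
    return np.arccosh((1.0 + np.sqrt(u)) / np.sqrt(v)) / (4.0 * G_N)

# Illustrative parameters only: c = 100, beta = 1, small deformation mu.
c, beta, mu = 100.0, 1.0, 1.0e-5
G_N = 3.0 / (2.0 * c)                       # Brown-Henneaux
r_h = 2.0 * np.pi / beta                    # u_h = beta/(2 pi), R = 1
r_c = 1.0 / np.sqrt(np.pi * c * mu / 6.0)   # u_c^2 = pi c mu / 6
u_h = beta / (2.0 * np.pi)

xs = (0.0, 0.4, 0.6, 1.1)
ew_cutoff = ewcs_disjoint(xs, r_h, r_c, beta, G_N)

# Leading (undeformed) term of the small-u_c expansion quoted below, for comparison.
x21, x32, x41, x43 = xs[1] - xs[0], xs[2] - xs[1], xs[3] - xs[0], xs[3] - xs[2]
ew_leading = np.arccosh(1.0 + 2.0 * np.sinh(x21 / (2 * u_h)) * np.sinh(x43 / (2 * u_h))
                        / (np.sinh(x32 / (2 * u_h)) * np.sinh(x41 / (2 * u_h)))) / (4.0 * G_N)

print("E_W at finite cut-off :", ew_cutoff)
print("E_W leading term      :", ew_leading)
print("difference (O(u_c^2)) :", ew_cutoff - ew_leading)
```

Comparing the two printed values isolates numerically the small correction proportional to u_c^2 that is extracted analytically in the expansion below.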
The corresponding EWCS may then be computed from <ref> as E_W(A:B) =1/4G_Ncosh ^-1( √([ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_31/u_h^2) ] [ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_42/u_h^2) ] /[ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_32/u_h^2) ] [ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_41/u_h^2) ] ). + . √([ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_21/u_h^2) ] [ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_43/u_h^2) ] /[ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_32/u_h^2) ] [ u_c^2-u_h^2+u_h^2 cosh( √(u_h^2-u_c^2) x_41/u_h^2) ] ) ). To compare with the field theory computations in <ref>, we have to take the limit of small deformation parameter μ, corresponding to large cut-off radius r_c (or small u_c) [see <ref>]. Further we must consider the high temperature limit β≪ |x_ij|, as the dual cut-off geometry resembles a BTZ black hole only in the high temperature limit. Expanding <ref> for small u_c and β≪ |x_ij| we arrive at E_W(A:B) = 1/4G_Ncosh^-1[ 1 + 2 sinh( x_21/2u_h) sinh( x_43/2u_h) /sinh( x_32/2u_h) sinh( x_41/2u_h) ] - u_c^2/16G_N u_h^3√(sinh( x_21/2u_h) sinh( x_43/2u_h) /sinh( x_31/2u_h) sinh( x_42/2u_h) )[ x_21( x_21/2u_h) + x_43( x_43/2u_h) . . -x_32( x_32/2u_h) - x_41( x_41/2u_h) ] - u_c^2/32G_N u_h^2( √(sinh( x_31/2u_h) sinh( x_42/2u_h) /sinh( x_21/2u_h) sinh( x_43/2u_h) ). . ×[ ^2 ( x_31/2u_h) +^2 ( x_42/2u_h) -^2 ( x_32/2u_h) -^2 ( x_41/2u_h) ] . + √(sinh( x_21/2u_h) sinh( x_43/2u_h) /sinh( x_31/2u_h) sinh( x_42/2u_h) ) . ×[ ^2 ( x_21/2u_h) +^2 ( x_43/2u_h) -^2 ( x_32/2u_h) -^2 ( x_41/2u_h) ] ). The first term in <ref> is the EWCS between the two disjoint intervals for the corresponding undeformed 2. The rest of the terms (proportional to u_c^2 and thus to μ) describes the leading order corrections for the EWCS due to the deformation. The third term becomes negligible (compared to the second term) in the high temperature limit. The change in HEE for two disjoint intervals due to the deformation is given by <cit.> δ S(A∪ B) = - μ c^2 π ^4 /9β^3[ x_32( π x_32/β) + x_41( π x_41/β) ]. The change in holographic OEE for two disjoint intervals due to the deformation may now be computed by combining <ref> through <ref>, and is given by <ref>, where we have utilized the holographic dictionary to substitute G_N=3/(2c), u_h=β/(2π) and u_c^2 = π c μ /6. Interestingly our holographic result matches exactly with our earlier field theory computation in <ref>, in the large central charge limit together with small deformation parameter and high temperature limits, which serves as a strong consistency check for our holographic construction. §.§.§ Two adjacent intervals We now consider two adjacent intervals A=[x_1,x_2] and B=[x_2,x_3] with x_1<x_2<x_3 as described in <ref>. The configuration has been depicted in <ref>. The EWCS for the corresponding bulk points X(s_1),X(s_2),X(s_3) is given by <cit.> E_W=1/4 G_Ncosh ^-1(√(2)/√(v)), where v= ξ_13^-1/ξ_12^-1ξ_23^-1 , ξ_ij^-1=-X(s_i)· X(s_j) . As earlier the three points on the boundary may be expressed in the global coordinates as X(0,r_c,x_i) for i=1,2,3. The corresponding EWCS may then be computed from <ref> as E_W(A:B) = 1/4 G_Nlog[ 4 u_h sinh( x_21/2 u_h) sinh( x_32/2 u_h) / u_c sinh( x_31/2 u_h) ] - u_c^2/16 G_N u_h^3[ x_21( x_21/2 u_h) - x_31( x_31/2 u_h) + x_32( x_32/2 u_h) ] + u_c^2/16 G_N u_h^2[ ^2 ( x_21/2 u_h) - ^2 ( x_31/2 u_h) + ^2 ( x_32/2 u_h) ]. Similar to the disjoint configuration, the first term in <ref> is the EWCS between the two adjacent intervals for the corresponding undeformed 2. 
The rest of the terms (proportional to u_c^2 and thus to μ) describes the leading order corrections for the EWCS due to the deformation. The third term becomes negligible (compared to the second term) in the high temperature limit. The change in HEE for two adjacent intervals due to the deformation is given by <cit.> δ S(A∪ B)= - ( μ c^2 π^4 /9 β^3) x_31(π x_31/β) . The change in holographic OEE for two adjacent intervals due to the deformation may now be obtained from <ref>, and is described by <ref>, where as earlier we have used the holographic dictionary. Once again we find exact agreement between our holographic and field theory results (in the large central charge limit, along with small deformation parameter and high temperature limits), which substantiates our holographic construction. §.§.§ A single interval Finally we consider the case of a single interval A=[-ℓ,0] in a thermal deformed holographic 2 (ℓ>0). As described in <ref> this necessitates the introduction of two large but finite auxiliary intervals B_1=[-L, -ℓ] and B_2=[0,L] sandwiching the interval A with B≡ B_1∪ B_2 (L≫ℓ) <cit.>. The situation has been outlined in <ref>. We then compute the holographic OEE for this modified configuration, and finally take the bipartite limit B→ A^c (implemented through L→∞) to obtain the desired OEE for the original configuration of the single interval A. The EWCS between the intervals A and B=B_1∪ B_2 may be computed from the following relation <cit.> Ẽ_W(A:B)=E_W(A:B_1)+E_W(A:B_2) , where Ẽ_W(A:B) denotes an upper bound on the EWCS between the intervals A and B. All subsequent computations involving <ref> should be interpreted accordingly. Note that each term on the right hand side of <ref> represents the EWCS of two adjacent intervals which has already been computed in <ref>. The corrections to these terms may thus be read off from <ref> as follows δ E_W(A:B_1) = - u_c^2/16 G_N u_h^3[ ℓ( ℓ/2 u_h) + (L-ℓ) ( L - ℓ/2 u_h) - L ( L/2 u_h) ], and δ E_W(A:B_2) = - u_c^2/16 G_N u_h^3[ ℓ( ℓ/2 u_h) + L ( L/2 u_h) -(L+ℓ) ( L+ℓ/2 u_h) ], where we have already taken the limits of small deformation parameter and high temperature. The correction to the HEE for a single interval is given as follows <cit.> δ S (A∪ A^c)= - ( 2 μ c^2 π ^4 L /9 β ^3) (2 π L/β), where the bipartite limit has already been implemented. The correction to holographic OEE for a single interval due to the deformation may then be computed from <ref> through <ref> on effecting the bipartite limit L→∞ as follows δ S_o (A : A^c) = -μ c^2 π^4 ℓ/9β^3[ ( πℓ/β) - 1 ] = -2 μ c^2 π^4 ℓ/9β^3(1/ e^2 πℓ/β -1 ), where we have utilized the holographic dictionary as earlier. Note that on taking the high temperature limit (β→ 0), <ref> reduces (the second part of the first term becomes negligible as e^-2 πℓ/β→ 0) exactly to <ref>. This once again serves as a robust consistency check for our holographic construction. § DEFORMED FINITE SIZE 2 AND HOLOGRAPHY §.§ OEE in a deformed finite size 2 In this section we follow a similar prescription as in <ref> to formulate a perturbative expansion for the OEE in a deformed finite size 2 of length L at zero temperature. For this setup, the corresponding manifold ℳ describes an infinitely long cylinder of circumference L with the length direction periodically compactified by the relation x∼ x+L <cit.>. The cylindrical manifold ℳ for this configuration may be represented by the complex coordinates described in <ref> with the spatial coordinate x∈ (0,L) and the time coordinate τ∈ (-∞,∞) <cit.>. 
The cylinder ℳ may be further described on the complex plane ℂ through the following conformal map <cit.> z=e^- 2π i w/L , z̅=e^2π i w̅/L , where (z, z̅) are the coordinates on the complex plane. The relations in <ref> remain valid with β effectively replaced by iL. With these modifications, the expressions in <ref> may now be applied to compute the OEE in a deformed finite size 2 at zero temperature. §.§.§ Two disjoint intervals As earlier we start with the mixed state of two disjoint spatial intervals A=[x_1,x_2] and B=[x_3,x_4] in a deformed finite size 2 of length L at zero temperature, defined on the cylindrical manifold ℳ described above (x_1<x_2<x_3<x_4). The first order correction in the OEE of two disjoint intervals in a deformed finite size 2 may be obtained by substituting <ref> along with <ref> (β replaced by iL) into <ref> as follows δ S_o(A:B) = -μ c^2 π ^4 /18L^4(z_1-z_3)^2(z_2-z_4)^2(η -1)√(η) ×∫ _ℳ z^2 [ (z_2-z_3)(z_2-z_4)((z-z_1)(z_3-z_4)+(z_1-z_3)(2z-3z_1+z_4) √(η))/(z-z_1)^2. . + (z_1-z_3)(z_1-z_4)(-((z-z_2)(z_3-z_4))+(2z-3z_2+z_3)(z_2-z_4)√(η))/(z-z_2)^2. . - (z_1-z_4)(z_2-z_4)((z_1-z_2)(-z+z_3)+(2z+z_2-3z_3)(z_1-z_3)√(η))/(z-z_3)^2. . + (z_1-z_3)(z_3-z_2)((z_1-z_2)(z-z_4)+(2z+z_1-3z4_)(z_2-z_4)√(η))/(z-z_4)^2]. We now substitute z → e^-2π i (x+iτ)/L into <ref> and integrate the resulting expression with respect to x to arrive at δ S_o(A:B) = iμ c^2π ^3/36L^3 √(η)∫ dτ[ z_1√(η)/e^2π (-ix+τ)/L-z_1 + z_2√(η)/e^2π (-ix+τ)/L-z_2 + z_3√(η)/e^2π (-ix+τ)/L-z_3. . + z_4√(η)/e^2π (-ix+τ)/L-z_4 + (z_1(z_3-z_4)+(z_1-z_3)(z_1+z_4) √(η))log [e^2π (-ix+τ)/L-z_1]/(z_1-z_3)(z_1-z_4). . + (z_2(z_4-z_3)+(z_2+z_3)(z_2-z_4) √(η))log [e^2π (-ix+τ)/L-z_2]/(z_2-z_3)(z_2-z_4). . + ((z_2-z_1)z_3+(z_1-z_3)(z_2+z_3) √(η))log [e^2π (-ix+τ)/L-z_3]/(z_1-z_3)(z_3-z_2). . + ((z_2-z_1)z_4+(z_1+z_4)(z_4-z_2) √(η))log [e^2π (-ix+τ)/L-z_4]/(z_1-z_4)(z_4-z_2)]. We observe that the first four terms on the right hand side of <ref> readily vanish on inserting the limits of integration x=0 and x=L. Since we have considered the system on a constant time slice, we may take τ_j (j=1,2,3,4) to be zero for all boundary points, and the contributions of the logarithmic functions become zero identically. Thus it is observed that the resultant integrand for the τ integration in <ref> vanishes. Hence the first order correction to the OEE vanishes. §.§.§ Two adjacent intervals We now focus on the bipartite mixed state of two adjacent intervals A=[x_1,x_2] and B=[x_2,x_3] in a deformed finite size 2 of length L at zero temperature, defined on the cylindrical manifold ℳ described by <ref> (x_1<x_2<x_3). For this case, <ref> may still be employed along with the relation described in <ref>, effectively replacing β by iL. The first order correction in OEE due to μ for two adjacent intervals is then given by δ S_o(A:B) = -μ c^2π^4/18L^4∫_ℳz^2/(z-z_1)^2(z-z_2)^2(z-z_3)^2 ×[ z_2^2z_3^2-z_1z_2z_3(z_2+z_3)+z_1^2(z_2^2-z_2z_3+z_3^2) . . +z^2(z_1^2+z_2^2-z_2z_3+z_3^2-z_1(z_2+z_3))-z(z_1^2(z_2+z_3) . . +z_2z_3(z_2+z_3)+z_1(z_2^2-6z_2z_3+z_3^2)) ]. Next we replace z → e^-2π i (x+iτ)/L into <ref> and subsequently integrate with respect to x to obtain δ S_o(A:B) = iμ c^2π^3/36L^3∫ dτ[ z_1/e^2π (-ix+τ)/L-z_1+z_2/e^2π (-ix+τ)/L-z_2+z_3/e^2π (-ix+τ)/L-z_3. . +(z_1^2-z_2z_3)log [e^2π (-ix+τ)/L-z_1]/(z_1-z_2)(z_1-z_3) +(z_2^2-z_1z_3)log [e^2π (-ix+τ)/L-z_2]/(z_2-z_1)(z_2-z_3). . +(z_3^2-z_2z_1)log [e^2π (-ix+τ)/L-z_3]/(z_1-z_3)(z_2-z_3)]. 
Similar to the disjoint case, the first three terms on the right hand side of <ref> readily vanish when the limits of integration x=0 and x=L are inserted. As earlier, for a constant time slice τ_j=0 (j=1,2,3), the logarithmic functions also contribute nothing to the definite integral. The resulting integrand for the τ integration in <ref> thus vanishes. Hence the corresponding first order correction in the OEE of two adjacent intervals turns out to be zero. §.§.§ A single interval Finally we turn our attention to the bipartite mixed state configuration of a single interval A=[x_1,x_2] in a deformed finite size 2 of length L at zero temperature, defined on the cylindrical manifold ℳ given in <ref> (x_1<x_2). The construction of the relevant partially transposed reduced density matrix for this configuration is described in <cit.>. Once again we may utilize <ref> with only two points z_1 and z_2, subject to <ref> (with the effect of iL replacing β), and a two point twist correlator as mentioned below in <ref>. We have expressed the modified version of <ref> as applicable for the system under consideration for convenience of the reader as follows ∫_ℳ_n_oTT̅_ℳ_n_o = 1/n_o∫_ℳ1/σ^2_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2) ×[ π^2 c n_o/6 L^2 - (2 π z/L)^2 ∑_j=1^2(h_j/(z-z_j)^2+1/(z-z_j)∂_z_j) ] ×[ π^2 c n_o/6 L^2 - (2 πz̅/L)^2 ∑_k=1^2(h̅_k/(z̅-z̅_k)^2+1/(z̅-z̅_k)∂_z̅_k) ] ×σ^2_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2) _𝒞 , where h_1=h_2=h^(2)_n_o with h̅_i=h_i (i=1,2) [see <ref>]. The corresponding two point twist correlator for this configuration is given by <cit.> σ^2_n_o(z_1, z̅_1)σ̅^2_n_o(z_2, z̅_2) = 𝒞_12/| z_1-z_2 |^2h_n_o , where 𝒞_12 is the relevant normalization constant. Following a similar procedure like the earlier cases, the first order correction for the OEE of this setup may be given as follows δ S_o(A:B)= -μ c^2π^4/18L^4 (z_1-z_2)^2 ∫_ℳz^2/(z-z_1)^2(z-z_2)^2 . We then obtain the following expression by substituting z → e^-2π i (x+iτ)/L into <ref> and integrating with respect to x δ S_o(A:B) = i μ c^2π^3/36L^3∫ dτ[ z_1/ e^2π (-ix+τ)/L-z_1+z_2/ e^2π (-ix+τ)/L-z_2. . +z_1+z_2/z_1-z_2( log[ e^2π (-ix+τ)/L-z_1 ]-log[ e^2π (-ix+τ)/L-z_2 ] ) ]. Like the previous cases, we observe that the first two terms in <ref> vanish on implementation of the limits of integration x=0 and x=L. As the system under consideration is on a constant time slice τ_j=0 (j=1,2), once again the terms containing the logarithmic functions also vanish. Again the resulting integrand for the τ integration in <ref> vanishes, indicating the vanishing of the first order corrections of the OEE as earlier. §.§ Holographic OEE in a deformed finite size 2 The bulk dual of a deformed finite size 2 of length L at zero temperature is represented by a finite cut-off 3 geometry expressed in global coordinates as follows <cit.> ds^2=R^2 ( -cosh^2ρ dτ^2 +sinh^2 ρ dϕ^2 + dρ^2 ), where ϕ=2π x/L. As earlier we embed this 3 geometry in ℝ^2,2 as follows <cit.> ds^2=η_ABdX^A dX^B =-dX^2_0-dX^2_1+dX^2_2+dX^2_3 , X^2=-1 . The metric in <ref> may be expressed in terms of the embedding coordinates introduced in <ref> as follows X_0(τ,ϕ,ρ) = R coshρsinτ, X_1(τ,ϕ,ρ) = R coshρcosτ, X_2(τ,ϕ,ρ) = R sinhρcosϕ, X_3(τ,ϕ,ρ) = R sinhρsinϕ. The finite cut-off of the 3 geometry is located at ρ=ρ_c, where coshρ_c = √(3L^2/2 μ c π^3) . With the UV cut-off of the field theory given by ϵ = √(μ c π / 6) [see <ref>], the relation in <ref> may be rewritten as coshρ_c=L/2 πϵ . 
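As a quick consistency check of the last relation, one may verify that the two expressions for coshρ_c coincide once the field-theory cut-off ϵ = √(μ c π/6) is substituted; a small SymPy sketch (purely illustrative):

```python
import sympy as sp

L, mu, c = sp.symbols('L mu c', positive=True)
eps = sp.sqrt(mu*c*sp.pi/6)                         # UV cut-off of the field theory
cosh_rc = sp.sqrt(3*L**2/(2*mu*c*sp.pi**3))         # location of the finite cut-off
print(sp.simplify((cosh_rc/(L/(2*sp.pi*eps)))**2))  # 1, i.e. cosh(rho_c) = L/(2*pi*eps)
```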
§.§.§ Two disjoint intervals We begin with two disjoint spatial intervals A=[x_1,x_2] and B=[x_3,x_4] on a cylindrical manifold ℳ as detailed in <ref> (x_1<x_2<x_3<x_4). Note that the EWCS involving arbitrary bulk points X(s_1),X(s_2),X(s_3),X(s_4) for a deformed finite size 2 is described by <cit.> E_W =1/4G_Ncosh ^-1( 1+√(u)/√(v)), where u=ξ^-1_12ξ^-1_34ξ^-1_13ξ^-1_24 , v=ξ^-1_14ξ^-1_23ξ^-1_13ξ^-1_24 , ξ^-1_ij=-X(s_i)· X(s_j) . The end points of the two disjoint intervals under consideration on the boundary may be represented by the embedding coordinates as X(0,ϕ_i,ρ_c) for i=1,2,3,4, where ϕ_1<ϕ_2<ϕ_3<ϕ_4 (Note that ϕ_i=2π x_i/L). The corresponding EWCS may then be computed from <ref> as E_W(A:B) = 1/4G_Ncosh^-1 ( √([ 1+ sin^2( π x_31/L) sinh^2ρ_c ] [ 1+ sin^2( π x_42/L) sinh^2ρ_c ]/[ 1+ sin^2( π x_32/L) sinh^2ρ_c ] [ 1+ sin^2( π x_41/L) sinh^2ρ_c ]). . + √([ 1+ sin^2( π x_21/L) sinh^2ρ_c ] [ 1+ sin^2( π x_43/L) sinh^2ρ_c ]/[ 1+ sin^2( π x_32/L) sinh^2ρ_c ] [ 1+ sin^2( π x_41/L) sinh^2ρ_c ]) ). To extract the desired first order corrections, we now expand <ref> in small (1/coshρ_c) as follows E_W(A:B)= 1/4G_Ncosh^-1[ 1 + 2sin( π x_21/L) sin( π x_43/L) /sin( π x_32/L) sin( π x_41/L)] +𝒪[ϵ^2 ], where we have utilized <ref> to substitute ϵ. The first term in <ref> is the EWCS between the two disjoint intervals for the corresponding undeformed 2. The rest of the terms characterizing the corrections for the EWCS due to the deformation are second order and higher in ϵ and thus negligible. The corresponding leading order corrections for the HEE due to the deformation has been shown to be zero <cit.>. Thus the leading order corrections to the holographic OEE of two disjoint intervals in a deformed finite size 2 is zero, which is in complete agreement with our corresponding field theory computations in the large central charge limit described in <ref>. §.§.§ Two adjacent intervals We now turn our attention to the case of two adjacent intervals A=[x_1,x_2] and B=[x_2,x_3] (x_1<x_2<x_3) as described in <ref>. The bulk description of the end points of the intervals A and B for a deformed finite size 2 is given by X(0,ϕ_i,ρ_c) for i=1,2,3, where ϕ_1<ϕ_2<ϕ_3 (ϕ_i=2π x_i/L). The EWCS for this configuration is described as follows <cit.> E_W=1/4 G_Ncosh^-1(√(2)/√(v)), where v= ξ_13^-1/ξ_12^-1ξ_23^-1 , ξ_ij^-1=-X(s_i)· X(s_j) . We now utilize <ref> to explicitly compute the EWCS as follows E_W (A:B) = 1/4G_Ncosh ^-1( √( 2 [cosh[2](ρ_c) - cos (2π x_21/L) sinh[2](ρ_c)] [cosh[2](ρ_c) - cos (2π x_32/L) sinh[2](ρ_c)] /cosh[2](ρ_c) - cos (2π x_31/L) sinh[2](ρ_c) )). We are now in a position to extract the leading order corrections to the EWCS from <ref> by expanding in small (1/coshρ_c) as follows E_W(A:B) = 1/4 G_Nlog[ ( 2L/πϵ) sin(π x_21/L) sin(π x_32/L)/sin(π x_31/L)] +𝒪[ϵ^2 ], where we have already substituted the relation in <ref>. As earlier the first term on the right hand side of <ref> describes the EWCS between the two adjacent intervals for the corresponding undeformed 2. Again the correction terms are second order and higher in ϵ and negligible. The leading order corrections of the HEE for this configuration due to the deformation has been demonstrated to be vanishing <cit.>. Hence the leading order corrections to the holographic OEE for this case vanishes, which once again is in conformity with our field theory results in the large central charge limit described in <ref>. 
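Both expansions above are easy to verify numerically. The sketch below (interval endpoints and cut-off values are illustrative) compares the exact argument of the inverse hyperbolic cosine in the disjoint-interval EWCS with its claimed ϵ→0 limit; it also checks the sine identity sin(π x_31/L) sin(π x_42/L) = sin(π x_32/L) sin(π x_41/L) + sin(π x_21/L) sin(π x_43/L), which is one way to see how the leading term arises. The adjacent-interval expansion can be checked in exactly the same manner.

```python
import numpy as np

L = 1.0
x1, x2, x3, x4 = 0.10, 0.35, 0.55, 0.90            # illustrative endpoints
s = lambda a, b: np.sin(np.pi*(a - b)/L)

# sine identity underlying the leading term (should print ~0)
print(s(x3, x1)*s(x4, x2) - s(x3, x2)*s(x4, x1) - s(x2, x1)*s(x4, x3))

def arg_full(eps):
    cosh_rc = L/(2*np.pi*eps)
    h = cosh_rc**2 - 1.0                           # sinh^2(rho_c)
    br = lambda a, b: 1.0 + s(a, b)**2*h
    return (np.sqrt(br(x3, x1)*br(x4, x2)/(br(x3, x2)*br(x4, x1)))
            + np.sqrt(br(x2, x1)*br(x4, x3)/(br(x3, x2)*br(x4, x1))))

arg_leading = 1.0 + 2.0*s(x2, x1)*s(x4, x3)/(s(x3, x2)*s(x4, x1))
for eps in (1e-2, 1e-3, 1e-4):
    print(eps, arg_full(eps) - arg_leading)        # difference shrinks like eps^2
```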
§.§.§ A single interval The bulk representation of the end points of a single interval of length ℓ may be given by X(0,0,ρ_c) and X(0,δϕ,ρ_c), where δϕ=2πℓ/L. The EWCS for the given configuration (same as the HEE for a single interval) may be computed as E_W(A:A^c)=1/4G_N cosh ^-1[ 1 + 2 sinh ^2(ρ _c) sin ^2(πℓ/L)]. Once again <ref> may be expanded for small (1/coshρ_c) to obtain the following expression for the EWCS E_W(A:A^c)= 1/2 G_Nlog[ L/πϵsin(πℓ/L)] +𝒪[ϵ^2 ], where we have used <ref> to replace coshρ_c. Once again the first term of <ref> represents the EWCS of a single interval for the corresponding undeformed 2, while we have neglected the second and higher order correction terms in ϵ. The corresponding corrections for the HEE of a single interval has been shown to be zero <cit.>. Thus the leading order corrections to the holographic OEE for a single interval vanishes, demonstrating agreement with our field theory calculations in the large central charge limit detailed in <ref>. § SUMMARY AND CONCLUSIONS To summarize we have computed the OEE for different bipartite mixed state configurations in a deformed finite temperature 2 with a small deformation parameter μ. In this context we have developed a perturbative construction to compute the first order correction to the OEE for small deformation parameter through a suitable replica technique. This incorporates definite integrals of the expectation value of the operator over an n_o sheeted replica manifold. We have been able to express these expectation values in terms of appropriate twist field correlators for the configurations under consideration. Utilizing our perturbative construction we have subsequently computed the OEE for the mixed state configurations described by two disjoint intervals, two adjacent intervals, and a single interval in a deformed thermal 2. Following the above we have computed the corresponding EWCS in the dual bulk finite cut-off BTZ black hole geometry for the above configurations utilizing an embedding coordinate technique in the literature. Interestingly it was possible to demonstrate that the first order correction to the sum of the EWCS and the corresponding HEE matched exactly with the first order correction to the 2 replica technique results for the OEE in the large central charge and high temperature limit. This extends the holographic duality for the OEE proposed in the literature to deformed thermal 2s. Finally we have extended our perturbative construction to deformed finite size 2s at zero temperature. We have computed the first order corrections to the OEE for the configurations mentioned earlier in such 2s in the large central charge limit. In all the cases we have been able to show that the leading order corrections vanish in the appropriate limits. Quite interestingly it was possible to demonstrate that the first order corrections to the corresponding bulk EWCS in the dual cut-off BTZ geometry were also identically zero in a further validation of the extension of the holographic duality for the OEE in the literature to deformed finite size 2s at zero temperature. It will be instructive to develop similar constructions for other entanglement measures such as entanglement of purification, balanced partial entanglement, reflected entropy etc. for deformed 2s. Also a covariant framework for holographic entanglement in these theories along the lines of the HRT construction is an important open issue. These constitute exciting open problems for the future. 
§ ACKNOWLEDGMENTS We would like to thank Lavish, Mir Afrasiar and Himanshu Chourasiya for valuable discussions. The work of Gautam Sengupta is supported in part by the Dr. Jag Mohan Garg Chair Professor position at the Indian Institute of Technology, Kanpur. The work of Saikat Biswas is supported by the Council of Scientific and Industrial Research (CSIR) of India under Grant No. 09/0092(12686)/2021-EMR-I. § THE INTEGRALS FOR THERMAL 2S The detailed derivation of the integrals appearing in <ref> has been provided in this appendix. Note that the corresponding domain of integration for all the configurations is the cylindrical manifold ℳ characterized by the complex coordinates (w, w̅) [see <ref>]. §.§ Two disjoint intervals The holomorphic part of the integral in <ref> may be written as - μ c^2 π^4 √(η)/18 β^4 z_21 z_32 z_41 z_43∫_ℳ d^2w (z^2) [ z_32 z_42 [z_31 (2z-3z_1+z_4)√(η)+z_43 (z-z_1)]/(z-z_1)^2 + z_31 z_41 [z_42 (2z-3z_2+z_3)√(η)-z_43 (z-z_2)]/(z-z_2)^2 -z_42 z_41 [z_31(2z+z_2-3z_3) √(η)-z_21(z-z_3)]/(z-z_3)^2 -z_31 z_32 [z_42 (2z+z_1-3z_4)√(η)+z_21 (z-z_4)]/(z-z_4)^2] = -μ c^2 π ^4 √(η)/18 β^4 z_21 z_32 z_41 z_43∫_0 ^∞ dx ∫_0 ^β dτ e^4 π (x+iτ)/β ×[ z_32 z_42 [z_31 (2e^2π(x+i τ)/β-3z_1+z_4)√(η)+z_43 (e^2π(x+i τ)/β-z_1)]/(e^2π(x+i τ)/β-z_1)^2. + z_31 z_41 [z_42 (2e^2π(x+i τ)/β-3z_2+z_3)√(η)-z_43 (e^2π(x+i τ)/β-z_2)]/(e^2π(x+i τ)/β-z_2)^2 -z_42 z_41 [z_31(2e^2π(x+i τ)/β+z_2-3z_3) √(η)-z_21(e^2π(x+i τ)/β-z_3)]/(e^2π(x+i τ)/β-z_3)^2 . -z_31 z_32 [z_42 (2e^2π(x+i τ)/β+z_1-3z_4)√(η)+z_21 (e^2π(x+i τ)/β-z_4)]/(e^2π(x+i τ)/β-z_4)^2]. The primitive function on indefinite integration with respect to τ turns out to be -i μ c^2 π^3 /36 β ^3 √(η)[ (√(η) z_1^2+(√(η)-1) z_1 (z_43)-√(η) z_3 z_4) log(-z_1+e^2 π (x+i τ )/β)/z_31 z_41. +(√(η) z_2^2+(√(η)-1) z_2 z_34-√(η) z_3 z_4) log(-z_2+e^2 π (x+i τ )/β)/z_32 z_42 - (√(η) z_1 z_2+(√(η)-1) z_1 z_3+z_3 (-√(η) z_2+z_2-√(η) z_3)) log(-z_3+e^2 π (x+i τ )/β)/z_31 z_32 . +(z_4 (-√(η) z_2+z_2+√(η) z_4)-z_1 (√(η) z_2-√(η) z_4+z_4)) log(-z_4+e^2 π (x+i τ )/β)/z_41 z_42] . Due to the presence of branch points, the logarithmic functions necessitate careful treatment while implementing the limits of integration τ=0 and τ=β. The following relation outlines the contribution due to a branch point at z=z_j <cit.> log(e^2π(x+i τ)/β-z_j) |_τ=0^τ=β = {[ 2 π i, for e^2π x/β > z_j ⇔ x > β/2πlog z_j ,; 0, otherwise. ]. The branch cuts of the logarithmic functions change the limits of the x integrals as follows ∫_-∞^∞ dx→∫_β/2πlog z_j^∞ dx, for j=1,2,3,4. We are now in a position to integrate over x and utilize the prescription described above to implement the limits of integration to arrive at μ c^2 π^3 /36 β^2( ( z_1 ( 1+ √(z_42 z_43/z_21 z_31) +z_4 ) )/z_41log[ z_1/z_2] +(-2+ √(z_21 z_43/z_31 z_42)) (z_1 z_2-z_3 z_4)/z_32 z_41log[ z_2/z_3] . . + ( z_1 +( 1+ √(z_12 z_31/z_42 z_43) z_4 ) )/z_41log[ z_3/z_4] ). The anti holomorphic part of the integral in <ref> follows a similar analysis and produces the same result as the holomorphic part. §.§ Two adjacent intervals The holomorphic part of the integral in <ref> may be written as ∫_ℳ  z^2 [ 1/(z-z_1)^2+1/(z-z_2)^2 +1/(z-z_3)^2+(-3 z+z_1+z_2+z_3) /(z-z_1) (z-z_2) (z-z_3)] =∫_-∞^∞dx∫_0^βdτ e^4 π (x+i τ )/β[ 1/(e^2 π (x+i τ )/β-z_1)^2+1/(e^2 π (x+i τ )/β-z_2)^2+1/(e^2 π (x+i τ )/β-z_3)^2. . +z_1+z_2+z_3-3 e^2 π (x+i τ )/β/(e^2 π (x+i τ )/β-z_1) (e^2 π (x+i τ )/β-z_2) (e^2 π (x+i τ )/β-z_3)] . We proceed in a similar manner to the disjoint configuration as described in <ref>. 
The indefinite integration with respect to τ leads to the following primitive function z_1/e^2 π (x+i τ )/β-z_1+z_2/e^2 π (x+i τ )/β-z_2+z_3/e^2 π (x+i τ )/β-z_3+(z_1^2-z_2 z_3)/(z_1-z_2) (z_1-z_3)log(e^2 π (x+i τ )/β-z_1) +(z_1 z_3-z_2^2)/(z_1-z_2)(z_2-z_3)log(e^2 π (x+i τ )/β-z_2)+(z_3^2-z_1 z_2) /(z_1-z_3)(z_2-z_3)log(e^2 π (x+i τ )/β-z_3). On implementation of the limits of integration τ = 0 and τ = β, the non logarithmic terms in the above expression vanish, while the contributions of the logarithmic terms follow the relation in <ref>. Due to the relation in <ref>, the limits of integration over x for each term in the integrand gets modified as follows ∫_-∞^∞ dx →∫_β/2πlog z_j^∞ dx, for j=1,2,3. The integration over x may now be performed to arrive at ∫_ℳ  z^2 [ 1/(z-z_1)^2+1/(z-z_2)^2 +1/(z-z_3)^2 +(-3 z+z_1+z_2+z_3) /(z-z_1) (z-z_2) (z-z_3)] = β ^2/2π[ (z_1^2-z_2 z_3) log(z_1/z_2)/z_12 z_13+(z_1 z_2-z_3^2) log(z_2/z_3)/z_23z_13]. As earlier, the anti holomorphic part of the integral gives result identical to the holomorphic part. §.§ A single interval The holomorphic part of the integral in <ref> is given by ∫_ℳ d^2 w ∑_j=1^4( z^2/(z-z_j)^2 - z^2/(z-z_j)∂_z_jlog[z^2_41z^2_23 η f(η) ] ) = ∫_0^∞ dx ∫_0^β dτ e^4 π (x +i τ)/β[ ∑_j=1^41/( e^2 π (x +i τ)/β-z_j)^2. + -4 e^4 π (x +i τ)/β -2z_3z_2 +z_1(-z_2+z_3-2z_4)+z_2z_4-z_3z_4+2e^2 π (x +i τ)/β(z_1+z_2+z_3+z_4)/( e^2 π (x +i τ)/β-z_1)( e^2 π (x +i τ)/β-z_2)( e^2 π (x +i τ)/β-z_3)( e^2 π (x +i τ)/β-z_4) . -z_21z_32z_41z_43f'(η)/( e^2 π (x +i τ)/β-z_1)( e^2 π (x +i τ)/β-z_2)( e^2 π (x +i τ)/β-z_3)( e^2 π (x +i τ)/β-z_4)z_31z_42 f(η)] . The indefinite integration over τ gives i β/2 π∑_j=1^4[ B_j+ C_j log( e^2 π (x+i τ)/β-z_j ) ] , where B_j=z_j/e^2 π (x+iτ)/β-z_j , j=1,2,3,4, and C_1, C_2, C_3 and C_4 are given as follows C_1 = -1/z_31^2[ z_31 (z_1^3+z_1^2(z_4-2z_3)+ z_1 z_2 (z_3-2 z_4)+ z_2 z_3 z_4 )/z_41 z_21 + z_1 z_32 z_43 f'(η)/z_42 f(η)] , C_2 = 1/z_42^2[ z_42 (z_2 ^3+ z_2 ^2 (z_3-2z_4)+z_1 z_2 (z_4-2z_3)+ z_1 z_3 z_4 )/z_32 z_21 + z_2 z_41 z_43 f'(η)/z_31 f(η)] , C_3 = -1/z_31^2 [ z_31 (z_3^3+(z_2-2 z_1) z_3^2+(z_1-2 z_2) z_4 z_3+z_1 z_2 z_4)/z_43z_32 +z_3z_21 z_41 f'(η)/z_42 f(η)] , C_4 = 1/z_42^2[ z_42 (z_4^3+(z_1-2 z_2) z_4^2+(z_2-2 z_1) z_3 z_4+z_1 z_2 z_3)/z_41 z_43 +z_4z_21 z_32 f'(η)/z_31 f(η)] . Once again the non logarithmic terms described by <ref> vanish on insertion of the limits of integration τ = 0 and τ= β, whereas the logarithmic terms in <ref> contribute according to the relation in <ref>, which modifies the limits of the integration over x as follows ∫_-∞^∞ dx →∫_β/2πlog z_j^∞ dx, j=1,2,3,4. The integration over x for the integrand in <ref> may now be performed with the modified limits described above to arrive at -β^2/2 π∑_j=1^4 C_j log z_j . The desired correction to the OEE of a single interval of length ℓ may now be obtained through the substitutions {z_1, z_2, z_3, z_4}→{e^-2π L/β, e^-2πℓ/β, 1, e^2π L/β} and subsequent implementation of the bipartite limit L→∞ as follows lim_L→∞∫_ℳ d^2 w ∑_j=1^4[z^2/(z-z_j)^2 - z^2/(z-z_j)∂_z_jlog[z^2_23 z_41^2 η f(η)]] = ℓβ(-1/( e^2 πℓ/β -1 ) + e^-2 πℓ/β f' [ e^-2 πℓ/β]/2 f [ e^-2 πℓ/β]) - lim_L→∞[ L β( 2 π L/β) ]. As before the anti holomorphic part of the integral produces identical result to the holomorphic part. utphys
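The branch-point prescription of <ref>, used repeatedly in this appendix, is at bottom a winding-number statement: as τ runs from 0 to β the point e^{2π(x+iτ)/β} traces a circle of radius e^{2πx/β}, and the logarithm picks up 2πi precisely when this circle encloses z_j. This can be confirmed numerically; the sketch below (with illustrative values of β and z_j) integrates the derivative of the logarithm over one period.

```python
import numpy as np

beta, z_j = 2.0, 1.7                                  # illustrative values
tau = np.linspace(0.0, beta, 20001)
dtau = tau[1] - tau[0]

def log_jump(x):
    w = np.exp(2*np.pi*(x + 1j*tau)/beta)
    f = (2j*np.pi/beta)*w/(w - z_j)                   # d/dtau log(w - z_j)
    return np.sum(0.5*(f[1:] + f[:-1]))*dtau          # trapezoidal rule over one period

x_enc = beta/(2*np.pi)*np.log(z_j) + 0.3              # e^{2 pi x/beta} > z_j
x_out = beta/(2*np.pi)*np.log(z_j) - 0.3              # e^{2 pi x/beta} < z_j
print(log_jump(x_enc)/(2j*np.pi))                     # ~ 1  ->  jump of 2*pi*i
print(log_jump(x_out)/(2j*np.pi))                     # ~ 0  ->  no contribution
```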
http://arxiv.org/abs/2307.04542v1
20230710131729
Customizing Synthetic Data for Data-Free Student Learning
[ "Shiya Luo", "Defang Chen", "Can Wang" ]
cs.CV
[ "cs.CV" ]
Customizing Synthetic Data for Data-Free Student Learning Shiya Luo Zhejiang University Hangzhou, China [email protected] Defang Chen Zhejiang University Hangzhou, China [email protected] Can Wang Zhejiang University Hangzhou, China [email protected] August 12, 2023 =============================================================================================================================================================================================================== Data-free knowledge distillation (DFKD) aims to obtain a lightweight student model without original training data. Existing works generally synthesize data from the pre-trained teacher model to replace the original training data for student learning. To more effectively train the student model, the synthetic data shall be customized to the current student learning ability. However, this is ignored in the existing DFKD methods and thus negatively affects the student training. To address this issue, we propose Customizing Synthetic Data for Data-Free Student Learning (CSD) in this paper, which achieves adaptive data synthesis using a self-supervised augmented auxiliary task to estimate the student learning ability. Specifically, data synthesis is dynamically adjusted to enlarge the cross entropy between the labels and the predictions from the self-supervised augmented task, thus generating hard samples for the student model. The experiments on various datasets and teacher-student models show the effectiveness of our proposed method. Code is available at: https://github.com/luoshiya/CSDhttps://github.com/luoshiya/CSD data-free knowledge distillation, self-supervision, model compression § INTRODUCTION In recent years, convolutional neural networks (CNNs) have achieved remarkable success in various applications <cit.> with over-parameterized architectures. But its expensive storage and computational costs make model deployment on mobile devices difficult. Therefore, knowledge distillation (KD) <cit.> comes into play to compress models by transferring dark knowledge from a well-trained cumbersome teacher model to a lightweight student model. The prevailing knowledge distillation methods <cit.> depend on a strong premise that the original data utilized to train the teacher model is directly accessible for student training. However, this is not always the case in some practical scenarios where the data is not publicly shared due to privacy, intellectual property concerns or excessive data size etc. Data-free knowledge distillation (DFKD) <cit.> is thus proposed to solve this problem. Existing DFKD methods generally divide each training round into two stages: data synthesis and knowledge transfer. Two different approaches are proposed in the data synthesis stage: model inversion inputs the random Gaussian noise into the fixed teacher model and iteratively updates the input via the back-propagation from the teacher model <cit.>; generative reconstruction utilizes a generator network to learn a mapping from the low-dimensional noise to the desired high-dimensional data manifold close to the original training data <cit.>. In the knowledge transfer stage, the synthetic data from the previous stage is used to train the student model with the regular knowledge distillation procedure. As training progresses, easy samples bring little new knowledge and contribute less to the student learning. 
The key to improvement of the student learning ability is to provide the student with hard samples in training such that it can continuously acquire new knowledge. Some existing adversarial DFKD methods generate hard samples on which the student disagree with the teacher by enlarging the divergence between their prediction distribution <cit.> (see Fig. <ref>). However, the teacher has not been trained on such synthetic samples, and thus soft predictions for many samples are likely to be inaccurate. The student will experience minimal improvement, or even a decline, in its learning ability when attempting to imitate the teacher on those incorrect samples (as shown in Fig. <ref>). Furthermore, it is difficult to manually evaluate whether soft predictions of the teacher is correct. In this paper, we propose Customizing Synthetic Data for Data-Free Student Learning (CSD), which directly takes the current student learning ability as a reference to adaptively synthesize hard samples and the learning ability is estimated through a self-supervised augmented auxiliary task that learns the joint distribution of the classification task and the self-supervised rotation task. In this way, the capability of capturing semantic information can serve as a good indicator of the student learning ability, and the auxiliary task can effectively verify how well the student understand semantics <cit.>. An extra auxiliary classifier appended to the student feature extractor learns the self-supervised augmented auxiliary task in knowledge transfer stage and then estimates the current student learning ability as an evaluator in data synthesis stage by calculating the divergence between labels and predictions from the auxiliary task. In this way, we accurately generate hard samples relative to current student learning ability by enlarging this divergence in an adversarial way. Different from the traditional adversarial objective <cit.>, we use the student model itself rather than the pre-trained teacher model to estimate the sample difficulty of the synthetic data (see Fig. <ref>), which is more reliable for the student training and beneficial for the student performance improvement. As shown in Fig. <ref>, the student improves its learning ability with our hard samples and are not easily disturbed by the teacher misinformation. Our contributions are summarized as follows: * We propose a novel method to dynamically generate hard samples based on the current learning ability of the student in the data-free knowledge distillation scenario. * An auxiliary classifier is used to learn a self-supervised augmented task, and also acts as an evaluator to estimate the student learning ability for hard data synthesis. * We conduct extensive experiments on various datasets and teacher-student model architectures. Experimental results confirm the effectiveness of our method. § PROPOSED METHOD The overview of our proposed CSD framework is shown in Fig. <ref>. The framework consists of a fixed pre-trained teacher, a generator, a student and an auxiliary classifier appended to the student feature extractor. The generator and the auxiliary classifier are trained in an adversarial manner. In data synthesis stage, the generator would explore hard samples based on the student learning ability with the auxiliary classifier. In knowledge transfer stage, the auxiliary classifier tries to improve its own evaluating ability. Two stages are executed alternately until convergence. 
§.§ Data Synthesis In data synthesis stage, we follow CMI <cit.> to synthesize data x̃∈ℝ^H× W × C (H, W, C denote the height, width and channel number, respectively) from a pre-trained teacher model as the surrogate for original training data x. We jointly update random noise vector z and the parameters θ_g of the generator 𝒢 to obtain x̃=𝒢(z) for n_g steps in each training round. The generator provides stronger regularization on pixels due to the shared parameters θ_g. Although the main purpose of our work is to synthesize hard data based on the current ability of the student itself, if we synthesize data only by the student, this may make the distribution of the synthetic data far away from the original training data due to the lack of data prior constraints. The optimization objective of data synthesis consists of two components and is formulated as: min_z,θ_gℒ_narrow-αℒ_csd, where ℒ_narrow aims to narrow the gap between the synthetic data and the original training data with the help of the well-trained teacher model for alleviating outliers, and ℒ_csd estimates the learning ability of the student. We will elaborate these two terms later. Narrowing the Distribution Gap. To make synthetic data more realistic, we adopt the following optimization objective to narrow the gap between the distribution of synthetic data and original training data: ℒ_narrow = ℒ_cls + ℒ_bns, ℒ_cls represents an one-hot assumption that if the synthetic data have the same distribution as that of the original training data, the prediction of the synthetic data by the teacher model would be like a one-hot vector <cit.>. Therefore, ℒ_cls is calculated as the cross entropy between the teacher prediction 𝒯(x̃) and the pre-defined label ỹ: ℒ_cls=CrossEntropy(ỹ, 𝒯(x̃)), ℒ_bns is a constraint that effectively utilizes statistics stored in the batch normalization (BN) layers of the teacher as data prior information <cit.>. It employs running mean μ_l and running variance σ_l^2 of the l-th BN layer as feature statistics of original training data. ℒ_bns is then calculated as the l2-norm distance between features statistics of synthetic data x̃ and original training data: ℒ_bns=∑_l(‖μ̃_l(x̃)-μ_l‖_2+‖σ̃_l^2(x̃)-σ_l^2‖_2), where μ̃_l(x̃) and σ̃_l^2(x̃) are mean and variance of the feature maps at the l-th teacher layer, respectively. Customizing Synthetic Data for the Student. In each training round, it is necessary to synthesize data adaptively according to the current student learning ability, so as to prevent the student from repeatedly learning oversimple samples. To quantify learning ability, we consider that if a model can understand the semantic information of a image well, it would have a strong learning ability. Specifically, we adopt a simple self-supervised task by first rotating each image at different angles and then forcing the model to identify which angle each image comes from. As illustrated in <cit.>, the model can effectively perform the rotation recognition task unless it first learns to recognize the object categories and then recognize semantic parts in the image. But only using the rotation task to estimate learning ability is not enough. For example,“6” is rotated 180^∘ for the digit “9” and 0^∘ for the digit “6”. Inspired by <cit.>, we also combine the original classification task and the self-supervised rotation task into a unified task, named as the self-supervised augmented task, which forces the model to identify the angle as well as the category to eliminating incorrect estimation. 
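Returning briefly to the synthesis objective, the two terms of ℒ_narrow admit a compact implementation. The following PyTorch-style sketch is only illustrative (it is not the released code, and the class and variable names are our own assumptions): the batch-normalization statistics of the synthetic batch are collected from the teacher with forward hooks and compared against the stored running statistics, while ℒ_cls is an ordinary cross entropy against the pre-defined labels ỹ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BNStatsLoss:
    """Collects per-layer feature statistics of the synthetic batch through
    forward hooks on the teacher's BatchNorm layers and compares them with the
    stored running statistics (the L_bns term)."""
    def __init__(self, teacher: nn.Module):
        self.bn_layers, self.batch_stats = [], []
        for m in teacher.modules():
            if isinstance(m, nn.BatchNorm2d):
                self.bn_layers.append(m)
                m.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        x = inputs[0]                                   # features entering the BN layer
        self.batch_stats.append((x.mean(dim=(0, 2, 3)),
                                 x.var(dim=(0, 2, 3), unbiased=False)))

    def __call__(self):
        # assumes the hook-call order matches the order of self.bn_layers
        loss = sum(torch.norm(mean - m.running_mean, 2) + torch.norm(var - m.running_var, 2)
                   for m, (mean, var) in zip(self.bn_layers, self.batch_stats))
        self.batch_stats = []                           # reset for the next forward pass
        return loss

def narrow_loss(teacher, bns_loss, x_syn, y_pre):
    logits_t = teacher(x_syn)                           # fills bns_loss via the hooks
    return F.cross_entropy(logits_t, y_pre) + bns_loss()
```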
We consider a N-way classification task and a M-way self-supervised rotation task. The CNN student model consists of two components: the feature extractor Φ:x̃→ℝ^d and the classifier h:ℝ^d→ℝ^N, i.e., 𝒮(x̃)=h(Φ(x̃)). Here d denotes the feature dimension. we attach an auxiliary classifier c:ℝ^d→ℝ^K with parameters θ_c behind the feature extractor, where K=N*M represents the number of categories for the self-supervised augmented task. ℒ_csd is calculated as follows: ℒ_csd = CrossEntropy(k, c(Φ(trans(x̃)))), where trans(·) is the operation of rotation and k is the label of the rotated version of synthetic data x̃ in the self-supervised augmented task. For example, if the category of x̃ in the original classification task is n and the category of its rotated version in the self-supervised rotation task is m, then the category in the self-supervised augmented task is n*M+m. By enlarging ℒ_csd, we generate hard samples on which the student has difficulty understanding semantics. §.§ Knowledge Transfer In knowledge transfer stage, the main purpose is to encourage the student model to mimic behaviors of the teacher model. The vanilla KD <cit.> matches final prediction distribution of the teacher and student model by calculating the Kullback-Leibler (KL) divergence between outputs of the teacher and the student: ℒ_kd = KL(σ(𝒯(x̃)/τ), σ(𝒮(x̃)/τ)), where σ(·) is the softmax function and τ is a hyper-parameter to soften the distribution. We set τ to 20 throughout all experiments for fair comparison as CMI <cit.>. Besides prediction distribution, feature maps can also be used as valuable knowledge to effectively guide the student <cit.>. We define the Mean-Square-error (MSE) loss between teacher feature maps F_t∈ℝ^H_t*W_t*C_t and student feature maps F_s∈ℝ^H_s*W_s*C_s from the last layer as: ℒ_fea = MSE(F_t, r(F_s)), where r(·) is a projection to align the dimension of feature maps. The student is trained for n_s steps in each training round and optimized by: min_θ_sℒ_ce+ℒ_kd+β*ℒ_fea, where β is a hyper parameter to balance the three loss items, and ℒ_ce=CrossEntropy(ỹ,𝒮(x̃)) is a regular loss in the original classification task to calculate cross entropy between student outputs and pre-defined labels. Besides the student training, the auxiliary classifier is also separately trained with the following loss to improve its own evaluation capability to better help the data synthesis stage: min_θ_cℒ_csd. §.§ Training Procedure The two-stage training procedure is summarized in Algorithm <ref>. In the data synthesis stage, the random noise z and generator 𝒢 are first trained for n_g times. Then we append the new synthetic data into an image bank for preventing catastrophic forgetting <cit.>. In knowledge transfer stage, we sample data from the image bank and separately train the student 𝒮 and the auxiliary classifier c for n_s times. § EXPERIMENTS Datasets and models. We conduct experiments on SVHN <cit.>, CIFAR-10 and CIFAR-100 <cit.> datasets, following a similar training setting as <cit.>. For all datasets, various models are used, including ResNet <cit.>, WRN <cit.>, VGG <cit.> and MobileNet <cit.>. The generator architecture is the same as <cit.>. Training details. For all datasets, to prevent the student from overfitting to data generated by early training rounds <cit.>, we first synthesize some data to initialize the image bank by removing ℒ_csd and running 400 synthesis batches with each one containing 200 samples. We totally train 100 rounds (epochs). 
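For concreteness, the objectives defined above may be sketched as follows (again a PyTorch-style illustration rather than the released implementation; the feature-returning interface, the projection r(·) and all names are assumptions). The first function builds the rotated copies and the joint labels of the self-supervised augmented task; the second assembles the knowledge-transfer objective.

```python
import torch
import torch.nn.functional as F

def csd_loss(feat_extractor, aux_classifier, x_syn, y_syn, n_rot=4):
    """Self-supervised augmented loss: rotate each image by 0/90/180/270 degrees
    and classify the joint (class, angle) label y*n_rot + r."""
    xs, ks = [], []
    for r in range(n_rot):
        xs.append(torch.rot90(x_syn, r, dims=(2, 3)))
        ks.append(y_syn*n_rot + r)
    logits = aux_classifier(feat_extractor(torch.cat(xs)))
    return F.cross_entropy(logits, torch.cat(ks))

def kt_loss(teacher, student, proj, x_syn, y_syn, tau=20.0, beta=30.0):
    """Knowledge-transfer objective: cross entropy + KD at temperature tau
    + feature matching through an alignment projection r(.)."""
    with torch.no_grad():
        logits_t, feat_t = teacher(x_syn, return_feat=True)   # assumed interface
    logits_s, feat_s = student(x_syn, return_feat=True)
    l_ce = F.cross_entropy(logits_s, y_syn)
    l_kd = F.kl_div(F.log_softmax(logits_s/tau, dim=1),
                    F.softmax(logits_t/tau, dim=1), reduction='batchmean')
    l_fea = F.mse_loss(proj(feat_s), feat_t)
    return l_ce + l_kd + beta*l_fea
```

In each round the generator ascends csd_loss through the -αℒ_csd term of the synthesis objective, while on batches drawn from the image bank the student minimizes kt_loss and the auxiliary classifier separately minimizes csd_loss, following the training procedure described above.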
In data synthesis stage, the random noise vector and generator are updated using Adam optimizer with 1e-3 learning rate. We synthesize 200 images in each step and repeat for n_g=500 steps. The hyper-parameter α is set to 10. In knowledge transfer stage, the student and the auxiliary classifier are update using SGD optimizer with 0.1 learning rate, 0.9 momentum and 1e-4 weight decay and we adopt cosine annealing for the learning rate decay. we sample 128 images from the image bank in each step and repeat for n_s=2000 steps. The hyper-parameter β is set to 30. We set temperature τ to 20. Test accuracy is used to evaluate the proposed method. We run all experiments for three times and report the means. More implementation details and results can be found in the appendix. §.§ Comparison with DFKD methods We compare with four representative DFKD methods on five groups of teacher-student models, including three homogeneous and two heterogeneous architecture combinations. DAFL <cit.> and ZSKT <cit.> are generator-based methods. ADI <cit.> and CMI <cit.> are inversion-based methods. Table <ref> shows that our proposed CSD outperforms all other methods. We also observe that, except for CMI, other comparison methods perform poorly on heterogeneous combinations and more complex datasets. For example, in the case of “WRN-40-2 & VGG8" on CIFAR-100, the test accuracy of DFAL is only 25.24%, which do not even achieve half accuracy of the student trained on the original data (68.76%). In contrast, our proposed CSD is robust on different datasets and teacher-student combinations. §.§ Effect of Our Proposed Adversarial Loss We conduct ablation study on CIFAR-10 and CIAFR-100 to explore whether our proposed adversarial loss L_csd can help improve the student performance. As shown in Table <ref>, in the case of Baseline, i.e., removing the adversarial loss (Equation <ref>), the accuracy drops by 3.62% on CIFAR-10 (from 90.50% to 86.88%) and 3.29% on CIFAR-100 (from 60.88% to 57.59%), which demonstrates the effectiveness of our proposed ℒ_csd. To further demonstrate the superiority of our method, we compare with two alternative adversarial strategies. The first one is traditional adversarial manner as the previous work <cit.>, whose adversarial loss is to calculate the divergence between predictions of the teacher and student. We replace ℒ_csd with traditional adversarial loss L_adv = KL(σ(𝒯(x̃)/τ), σ(𝒮(x̃)/τ)) and find that it has a slight improvement of 0.65% (from 86.88% to 87.57%) compared to Baseline on CIFAR-10. Surprisingly, We observe that it even results in a large drop of 4.09% (from 57.59% to 53.5%) on the more complex CIFAR-100 dataset. This indicates that estimating the sample difficulty with teacher predictions is likely to be unreliable, which would enlarge the negative effect in the case of teacher misdirection and thus weakens the student performance. Additionally, we plot the learning curves of the student trained by different strategies. In Fig. <ref>, it is clear that ℒ_adv causes very large accuracy fluctuations across training rounds (epochs), while our CSD makes the model converge faster and more stable. The second alternative strategy is to use only the rotation task as the final task to quantify the student learning ability without containing the original classification task. So we replace ℒ_csd with self-supervised rotation loss ℒ_rotation = CrossEntropy(m,c(Φ(trans(x̃)))), where m is the label of synthetic data in the rotation task. 
From Table <ref>, this causes significantly performance improvement on both CIFAR-10 and CIFAR-100 compared to the traditional adversarial manner, which shows the superiority of synthesizing hard samples according to the current student learning ability. However, only rotation task may destroy the original visual semantic information on some samples (such as “6” vs “9”) and results in inaccurate ability estimation. By combining the original classification task and the self-supervised rotation task, our CSD further improves the model performance. §.§ Auxiliary Classifier Analysis Next, we explore how the structure and training strategy of the auxiliary classifier affect the final student performance. To study the effect of the auxiliary classifier structure, we attach different numbers of fully-connected layers (from 1 to 3) behind the feature extractor. In Fig. <ref>, only one fully-connected layer even has a negative impact, which reduces the student performance on CIFAR-10 and CIFAR-100 by about 3% and 5% compared to the Baseline (without ℒ_csd), while two or three fully-connected layers can achieve similarly superior performance. We conjecture that multiple layers can effectively filter out noise in feature representations to accurately estimate the student ability. Therefore, we adopt two fully-connected layers as the auxiliary classifier for all experiments to trade off between the effectiveness and complexity. To study the effect of the training strategy during the knowledge transfer stage, we conduct experiments with two different training strategies: joint training and separate training. (1) Joint training updates the parameters of the student and the auxiliary classifier simultaneously at each step, that is, change the lines 17 and 18 of the Algorithm <ref> to θ_s←θ_s-ξ∇_s(ℒ_KT+ℒ_csd) and θ_c←θ_c-ξ∇_c(ℒ_KT+ℒ_csd). This strategy requires the student to learn the self-supervised augmented task together with the original classification task. (2) Separate training is exactly our adopted strategy for CSD. At each step, we update the student parameters first and then fix it and turn to train the auxiliary classifier. Table <ref> demonstrates separate training performs better. We conjecture that the additional self-supervised auxiliary task might distract the student from the main classification task. § CONCLUSION In data-free knowledge distillation, the student model itself can act as a key contributor to synthesize more valuable data while this point is largely overlook previously. In this paper, we utilize a self-supervised augmented task to accurately estimate the current student learning ability in each training round to synthesize more valuable data rather than oversimple synthetic data. Extensive experiments are conducted on three popular datasets and various groups of teacher-student models to evaluate the performance of our proposed method, and the results demonstrates the effectiveness of our proposed CSD. A potential future work is to explore how to apply the popular diffusion models to synthetic samples for data-free knowledge distillation <cit.>. § APPENDIX §.§ Experimental Details §.§.§ Datasets We evaluate our proposed CSD on three public datasets for classification task: SVHN, CIFAR-10 and CIFAR-100. The details of these datasets are listed as follows: * SVHN <cit.>. SVHN is a dataset of street view house numbers collected by Google, and the size of each image is 32×32. 
It consists of over 600,000 labeled images, including 73257 training images, 26,032 testing images and 531,131 additional training images. * CIFAR-10 <cit.>. CIFAR-10 is a dataset of 32×32 colored images. It consists of 60,000 labeled images from 10 categories. Each category contains 6,000 images, which are divided into 5,000 and 1,000 for training and testing, respectively. * CIFAR-100 <cit.>. CIFAR-100 is similar but more challenging to CIFAR-10, which consists of 100 categories. Each categories contains 500 training images and 100 testing images. Note that the training set is only utilized for teacher training and is unseen for data-free knowledge distillation. However, the testing set is still used for assessment. §.§.§ Model Architectures For all datasets, three network types are used in teacher-student models: ResNet <cit.> ,WRN <cit.>, VGG <cit.> and MobileNet-V2 <cit.>. The number behind “VGG" and “ResNet" denotes the depth of the network. “WRN-n-k" denotes a residual network with n depths and widening factor k. We use the same generator architecture as the previous work <cit.>, which is detailed in Table <ref>. We set the dimension of random noise vector to 256. §.§.§ Baseline We compare with four representative data-free knowledge distillation methods: two generator-based methods (DSFL and ZSKT) and two inversion-based methods (ADI and CMI). The details of these compared methods are listed as follows: * DAFL <cit.>. DFAL is a generator-based DFKD method that introduces one-hot loss, activation loss and information entropy loss from the teacher feedback as constraints to generate data close to the original training data. * ZSKT <cit.>. ZSKT is another generator-based DFKD method that first introduces adversarial distillation. It generate hard samples on which the student poorly matches the teacher, i.e., maximizing the KL divergence between their predictions, and then use these hard samples to minimize the KL divergence in order to train the student. * ADI <cit.>. ADI is an inversion-based DFKD method that first proposes to utilize statistics stored in batch normalization layers of the teacher as image prior information. * CMI <cit.>. CMI is another inversion-based DFKD method that mainly addresses model collapse issue. It introduces a contrastive learning objective to encourage each sample to distinguish itself from others for sample diversity. §.§ Visualization We visualize synthetic images of our CSD from different training epochs in Figure <ref>. We observe that images from early training epoch are more visually discernible than images from later training epoch, which indicates that as the number of training epochs increases, the student learning ability gradually becomes stronger, leading to more difficult synthetic images. Additionally, we plot the learning curves of the auxiliary classifier during knowledge transfer in Fig. <ref>. §.§ Sensitivity Analysis To study how the hyper-parameter α affect the student final performance, we plot student accuracy curves on CIFAR-100 for WRN-40-2 & WRN-16-1 with α ranging from 2 to 20 at equal interval of 2. From Fig. <ref>, we find that our CSD outperforms the best competitor (CMI) on all values of α. §.§ RELATED WORK §.§.§ Data-Driven Knowledge Distillation Knowledge distillation (KD) is proposed to solve model compression problem by distilling knowledge from a cumbersome model (teacher) into a less-parameterized model (student). 
The vanilla KD <cit.> takes predictions from the last layer as the teacher knowledge to guide the student training. Besides predictions, many subsequent works excavate the knowledge in the output of intermediate layers to supervise the training of the student. The intermediate supervision can be formed by feature maps <cit.>, attention maps <cit.> or feature representation <cit.>. There are also some works for transferring knowledge in relationships between different samples or layers <cit.>. All the above mentioned methods are based on the premise that the original training data is available, while our proposed method is discussed in a more challenging scenario of no original data. §.§ Data-Free Knowledge Distillation Data-free knowledge distillation (DFKD) deals with transferring knowledge without the access to the original training data. A straightforward idea is to synthesize the original data for knowledge transfer. The approaches of data synthesis can be roughly categorized into two classes: inversion-based and generator-based approaches. Inversion-based approaches input the random Gaussian noise into the fixed teacher and update the input iteratively via the back-propogation until meeting certain constraints <cit.>. ADI <cit.> proposes to leverage information stored in the batch normalization layers of the teacher to narrow gap between synthetic data and original data. CMI <cit.> introduces contrastive learning objective to address the mode collapse issue and thus ensure sample diversity. FastDFKD <cit.> introduces a meta-synthesizer to accelerate data synthesis process and achieves 100× faster speed. Generator-based approaches adopt a learnable generator to synthesize data <cit.>. DAFL <cit.> introduce one-hot loss, activation loss and information entropy loss as the objective of synthesizing data, which are calculated according to the teacher output. PRE-DFKD <cit.> designs a Variational Autoencoder (VAE) to replay synthetic samples for preventing catastrophic forgetting without storing any data. Adversarial Distillation <cit.> focus on synthesizing hard data by enlarging the divergence between predictions of the teacher and the student, so as to narrow the information gap between the teacher and the student. However, all above methods do not properly take into account the student's current ability during data synthesis, which may lead to oversimple samples and thus limit the final student performance. IEEEbib
http://arxiv.org/abs/2307.03949v1
20230708103948
Ergodic observables in non-ergodic systems: the example of the harmonic chain
[ "Marco Baldovin", "Raffaele Marino", "Angelo Vulpiani" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Institute for Complex Systems - CNR, P.le Aldo Moro 2, 00185, Rome, Italy Université Paris-Saclay, CNRS, LPTMS,530 Rue André Rivière, 91405, Orsay, France Dipartimento di Fisica e Astronomia, Universitá degli Studi di Firenze, Via Giovanni Sansone 1, 50019, Sesto Fiorentino, Italy Dipartimento di Fisica, Sapienza Universitá di Roma, P.le Aldo Moro 5, 00185, Rome, Italy In the framework of statistical mechanics the properties of macroscopic systems are deduced starting from the laws of their microscopic dynamics. One of the key assumptions in this procedure is the ergodic property, namely the equivalence between time averages and ensemble averages. This property can be proved only for a limited number of systems; however, as proved by Khinchin <cit.>, weak forms of it hold even in systems that are not ergodic at the microscopic scale, provided that extensive observables are considered. Here we show in a pedagogical way the validity of the ergodic hypothesis, at a practical level, in the paradigmatic case of a chain of harmonic oscillators. By using analytical results and numerical computations, we provide evidence that this non-chaotic integrable system shows ergodic behavior in the limit of many degrees of freedom. In particular, the Maxwell-Boltzmann distribution turns out to fairly describe the statistics of the single particle velocity. A study of the typical time-scales for relaxation is also provided. Ergodic observables in non-ergodic systems: the example of the harmonic chain Angelo Vulpiani August 12, 2023 ============================================================================== § INTRODUCTION Since the seminal works by Maxwell, Boltzmann and Gibbs, statistical mechanics has been conceived as a link between the microscopic world of atoms and molecules and the macroscopic one where everyday phenomena are observed <cit.>. The same physical system can be described, in the former, by an enormous number of degrees of freedom N (of the same order of the Avogadro number) or, in the latter, in terms of just a few thermodynamics quantities. Statistical mechanics is able to describe in a precise way the behavior of these macroscopic observables, by exploiting the knowledge of the laws for the microscopic dynamics and classical results from probability theory. Paradigmatic examples of this success are, for instance, the possibility to describe the probability distribution of the single-particle velocity in an ideal gas <cit.>, as well as the detailed behavior of phase transitions <cit.> and critical phenomena <cit.>. In some cases (Bose-Einstein condensation <cit.>, absolute negative temperature systems <cit.>) the results of statistical mechanics were able to predict states of the matter that were never been observed before. In spite of the above achievements, a complete consensus about the actual reasons for such a success has not been yet reached within the statistical mechanics community. The main source of disagreement is the so-called “ergodic hypothesis”, stating that time averages (the ones actually measured in physics experiments) can be computed as ensemble averages (the ones appearing in statistical mechanics calculations). Specifically, a system is called ergodic when the value of the time average of any observable is the same as the one obtained by taking the average over the energy surface, using the microcanonical distribution <cit.>. 
It is worth mentioning that, from a mathematical point of view, ergodicity holds only for a small amount of physical systems: the KAM theorem <cit.> establishes that, strictly speaking, non-trivial dynamics cannot be ergodic. Nonetheless, the ergodic hypothesis happens to work extremely well also for non-ergodic systems. It provides results in perfect agreement with the numerical and experimental observations, as seen in a wealth of physical situations <cit.>. Different explanations for this behavior have been provided. Without going into the details of the controversy, three main points of view can be identified: (i) the “classical” school based on the seminal works by Boltzmann and the important contribution of Khinchin, where the main role is played by the presence of many degrees of freedom in the considered systems  <cit.>; (ii) those, like the Prigogine school, who recognize in the chaotic nature of the microscopic evolution the dominant ingredient <cit.>; (iii) the maximum entropy point of view, which does not consider statistical mechanics as a physical theory but as an inference methodology based on incomplete information <cit.>. The main aim of the present contribution is to clarify, at a pedagogical level, how ergodicity manifests itself for some relevant degrees of freedom, in non-ergodic systems. We say that ergodicity occurs “at a practical level”. To this end, a classical chain of N coupled harmonic oscillators turns out to be an excellent case study: being an integrable system, it cannot be suspected of being chaotic; still, “practical” ergodicity is recovered for relevant observables, in the limit of N≫1. We believe that this kind of analysis supports the traditional point of view of Boltzmann, which identifies the large number of degrees of freedom as the reason for the occurrence of ergodic behavior for physically relevant observables. Of course, these conclusions are not new. In the works of Khinchin (and then Mazur and van der Lynden) <cit.> it is rigorously shown that the ergodic hypothesis holds for observables that are computed as an average over a finite fraction of the degrees of freedom, in the limit of N ≫ 1. Specifically, if we limit our interest to this particular (but non-trivial) class of observables, the ergodic hypothesis holds for almost all initial conditions (but for a set whose probability goes to zero for N →∞), within arbitrary accuracy. In addition, several numerical results for weakly non-linear systems  <cit.>, as well as integrable systems <cit.>, present strong indications of the poor role of chaotic behaviour, implying the dominant relevance of the many degrees of freedom. Still, we think it may be useful, at least from a pedagogical point of view, to analyze an explicit example where analytical calculations can be made (to some extent), without losing physical intuition about the model. The rest of this paper is organized as follows. In Section <ref> we briefly recall basic facts about the chosen model, to fix the notation and introduce some formulae that will be useful in the following. Section <ref> contains the main result of the paper. We present an explicit calculation of the empirical distribution of the single-particle momentum, given a system starting from out-of-equilibrium initial conditions. We show that in this case the Maxwell-Boltzmann distribution is an excellent approximation in the N→∞ limit. 
Section <ref> is devoted to an analysis of the typical times at which the described ergodic behavior is expected to be observed; a comparison with a noisy version of the model (which is ergodic by definition) is also provided. In Section <ref> we draw our final considerations. § MODEL We are interested in the dynamics of a one-dimensional chain of N classical harmonic oscillators of mass m. The state of the system is described by the canonical coordinates {q_j(t), p_j(t)} with j=1,..,N; here p_j(t) identifies the momentum of the j-th oscillator at time t, while q_j(t) represents its position. The j-th and the (j+1)-th particles of the chain interact through a linear force of intensity κ|q_j+1-q_j|, where κ is the elastic constant. We will assume that the first and the last oscillator of the chain are coupled to virtual particles at rest, with infinite inertia (the walls), i.e. q_0≡ q_N+1≡ 0. The Hamiltonian of the model reads therefore ℋ(𝐪,𝐩)=∑_j=0^N p_j^2/2 m + ∑_j=0^Nm ω_0^2 /2(q_j+1 - q_j)^2, where ω_0=√(κ/m). Such a system is integrable and, therefore, trivially non-ergodic. This can be easily seen by considering the normal modes of the chain, i.e. the set of canonical coordinates Q_k=√(2/N+1)∑_j=1^N q_j sinj k π/N+1 P_k=√(2/N+1)∑_j=1^N p_j sinj k π/N+1 , with k=1, ..., N. Indeed, by rewriting the Hamiltonian in terms of these new canonical coordinates one gets ℋ(𝐐,𝐏)=1/2∑_k=1^N P_k^2/m + ω_k^2 Q_k^2 , where the frequencies of the normal modes are given by ω_k=2 ω_0 sinπ k/2N +2 . In other words, the system can be mapped into a collection of independent harmonic oscillators with characteristic frequencies {ω_k}. This system is clearly non-ergodic, as it admits N integrals of motion, namely the energies E_k=1/2P_k^2/m + ω_k^2 Q_k^2 associated to the normal modes. In spite of its apparent simplicity, the above system allows the investigation of some nontrivial aspects of the ergodic hypothesis, and helps clarifying the physical meaning of this assumption. § ERGODIC BEHAVIOR OF THE MOMENTA In this section we analyze the statistics of the single-particle momenta of the chain. We aim to show that they approximately follow a Maxwell-Boltzmann distribution 𝒫_MB(p)=√(β/2π m)e^-β p^2/2m in the limit of large N, where β is the inverse temperature of the system. With the chosen initial conditions, β=N/E_tot. Firstly, extending some classical results by Kac <cit.>, we focus on the empirical distribution of the momentum of one particle, computed from a unique long trajectory, namely 𝒫_e^(j)p=1 T∫_0^T dt δp -p_j(t) . Then we consider the marginal probability distribution 𝒫_ep,t computed from the momenta {p_j} of all the particles at a specific time t, i.e. 𝒫_ep,t=1 N∑_j=1^N δp -p_j(t) . In both cases we assume that the system is prepared in an atypical initial condition. More precisely, we consider the case in which Q_j(0)=0, for all j, and the total energy E_tot, at time t=0, is equally distributed among the momenta of the first N^⋆ normal modes, with 1 ≪ N^⋆≪ N: P_j(0)= √(2m E_tot/N^⋆) for 1 ≤ j ≤ N^⋆ 0 for N^⋆< j ≤ N . In this case, the dynamics of the first N^⋆ normal modes is given by Q(t) =√(2 E_tot/ω_k^2N^⋆)sinω_k t P(t) =√(2 m E_tot/N^⋆)cosω_k t . §.§ Empirical distribution of single-particle momentum Our aim is to compute the empirical distribution of the momentum of a given particle p_j, i.e., the distribution of its values measured in time. This analytical calculation was carried out rigorously by Mazur and Montroll in Ref. <cit.>. 
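Before presenting the analytical argument, the claim can be illustrated with a short numerical experiment (a sketch; all parameter values are illustrative): the modes are evolved exactly according to Eq. (<ref>), the single-particle momentum is reconstructed by inverting the sine transform, and its statistics over a long time window are compared with the Maxwell-Boltzmann prediction.

```python
import numpy as np

N, N_star, m, omega_0, E_tot = 1000, 100, 1.0, 1.0, 1000.0   # illustrative values
k = np.arange(1, N + 1)
omega = 2*omega_0*np.sin(np.pi*k/(2*N + 2))
P0 = np.zeros(N)
P0[:N_star] = np.sqrt(2*m*E_tot/N_star)        # energy in the first N* modes, Q_k(0)=0

j = N//2                                        # particle whose momentum we follow
S_j = np.sqrt(2.0/(N + 1))*np.sin(np.pi*j*k/(N + 1))

times = np.random.uniform(0.0, 1.0e5, 20000)    # long-time sampling of p_j(t)
p_j = np.array([S_j @ (P0*np.cos(omega*t)) for t in times])

print(p_j.var(), m*E_tot/N)                     # ~ m/beta, the Maxwell-Boltzmann variance
print(np.mean(p_j**4)/(3.0*p_j.var()**2))       # ~ 1 for a Gaussian distribution
```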
Here, we provide an alternative argument that has the advantage of being more concise and intuitive, in contrast to the mathematical rigour of <cit.>. Our approach exploits the computation of the moments of the distribution; by showing that they are the same, in the limit of infinite measurement time, as those of a Gaussian, it is possible to conclude that the considered momentum follows the equilibrium Maxwell-Boltzmann distribution. The assumption N≫1 will enter explicitly the calculation. The momentum of the j-th particle can be written as a linear combination of the momenta of the normal modes by inverting Eq. (<ref>): p_j(t) =√(2/N+1)∑_k=1^N sinj k π/N+1 P_k(t) =2√(m E_tot/(N+1)N^⋆)∑_k=1^N^⋆sinkjπ/N+1cosω_k t where the ω_k's are defined by Eq. (<ref>), and the dynamics (<ref>) has been taken into account. The n-th empirical moment of the distribution is defined as the average p_j^n of the n-th powerof p_j over a measurement time T: p_j^n =1/T∫_0^Tdt p_j^n(t) =1/T∫_0^Tdt (C_N^⋆)^n ∏_l=1^n∑_k_l=1^N^⋆sink_l jπ/N+1cosω_k_l t =(C_N^⋆)^n ∑_k_1=1^N^⋆…∑_k_n=1^N^⋆sink_1jπ/N+1 …sink_njπ/N+1 1/T∫_0^Tdt cosω_k_1 t…cosω_k_n t with C_N^⋆=2√(m E_tot/(N+1)N^⋆) . We want to study the integral appearing in the last term of the above equation. To this end it is useful to recall that 1/2 π∫_0^2πd θcos^n(θ)= (n-1)!!/n!! for n even 0 for n odd . As a consequence, one has 1/T∫_0^Td t cos^n(ω t)≃(n-1)!!/n!! for n even 0 for n odd . Indeed, we are just averaging over ≃ω T/2 π periods of the integrated function, obtaining the same result we get for a single period, with a correction of the order O(ω T)^-1. This correction comes from the fact that T is not, in general, an exact multiple of 2 π/ω. If ω_1, ω_2, ..., ω_q are incommensurable (i.e., their ratios cannot be expressed as rational numbers), provided that T is much larger than (ω_j-ω_k)^-1 for each choice of 1 ≤ k < j ≤ q, a well known result <cit.> assures that 1/T∫_0^Td t cos^n_1(ω_1 t)·...·cos^n_q(ω_q t) ≃ 1/T∫_0^Td t cos^n_1(ω_1 t)·...·1/T∫_0^Td t cos^n_q(ω_1 t) ≃ (n_1-1)!!/n_1!!· ...·(n_q-1)!!/n_q!! if all n's are even , where the last step is a consequence of Eq. (<ref>). Instead, if at least one of the n's is odd, the above quantity vanishes, again with corrections due to the finite time T. Since the smallest sfrequency is ω_1, one has that the error is at most of the order Oq(ω_1 T)^-1≃ O(qN /ω_0 T). Let us consider again the integral in the last term of Eq. (<ref>). The ω_k's are, in general, incommensurable. Therefore, the integral vanishes when n is odd, since in that case at least one of the {n_l}, l=1,...,q, will be odd. When n is even, the considered quantity is different from zero as soon as the k's are pairwise equal, so that n_1=...=n_q=2. In the following we will neglect the contribution of terms containing groups of four or more equal k's: if n≪ N^⋆, the number of these terms is indeed ∼ O(N^⋆) times less numerous than the pairings, and it can be neglected if N^⋆≫1 (which is one of our assumptions on the initial condition). Calling Ω_n the set of possible pairings for the vector 𝐤=(k_1,...,k_l), we have then p_j^n≃C_N^⋆/√(2)^n ∑_𝐤∈Ω_n∏_l=1^n sink_ljπ/N+1 , with an error of O(1/N^⋆) due to neglecting groups of 4, 6 and so on, and an error O(nN/ω_0 T) due to the finite averaging time T, as discussed before. Factor 2^-n/2 comes from the explicit evaluation of Eq. (<ref>) . At fixed j, we need now to estimate the sums appearing in the above equation, recalling that the k's are pairwise equal. 
If j> N/N^⋆, the arguments of the periodic functions can be thought as if independently extracted from a uniform distribution 𝒫(k)=1/N^⋆. One has: sin^2 kj π/N+1≃∑_k=1^N^⋆1/N^⋆sin^2 kj π/N+1≃1/2 π∫_-π^πd θ sin^2(θ)=1/2 , and ∏_l=1^n sink_ljπ/N+1≃ 2^-n/2 , if 𝐤∈Ω_n. As a consequence p_j^n ≃C_N^⋆/2^n (N^⋆)^n/2 𝒩(Ω_n)≃m E_tot/N+1^n/2𝒩(Ω_n) , where 𝒩(Ω_n) is the number of ways in which we can choose the pairings. These are the moments of a Gaussian distribution with zero average and m E_tot/N+1 variance. Summarising, it is possible to show that, if n ≪ N^⋆≪ N, the first n moments of the distribution are those of a Maxwell-Boltzmann distribution. In the limit of N≫1 with N^⋆/N fixed, the Gaussian distribution is thus recovered up to an arbitrary number of moments. Let us note that the assumption Q_j(0)=0, while allowing to make the calculations clearer, is not really relevant. Indeed, if Q_j(0)≠ 0 we can repeat the above computation while replacing ω_k t by ω_k t + ϕ_k, where the phases ϕ_k take into account the initial conditions. Fig. <ref> shows the standardized histogram of the relative frequencies of single-particle velocities of the considered system, in the N ≫ 1 limit, with the initial conditions discussed before. As expected, the shape of the distribution tends to a Gaussian in the large-time limit. §.§ Distribution of momenta at a given time A similar strategy can be used to show that, at any given time t large enough, the histogram of the momenta is well approximated by a Gaussian distribution. Again, the large number of degrees of freedom plays an important role. We want to compute the empirical moments p^n(t)=1/N∑_j=1^N p_j^n(t) , defined according to the distribution 𝒫_e^(j)p introduced by Eq. (<ref>). Using again Eq. (<ref>) we get p^n(t)= 1/N∑_j=1^N(C_N^⋆)^n∑_k=1^N^⋆sinkjπ/N+1cosω_k t^n = 1/N(C_N^⋆)^n∑_k_1^N^⋆…∑_k_n=1^N^⋆∏_l=1^Ncosω_k_lt∑_j=1^Nsink_1 j π/N+1…sink_n j π/N+1 . Reasoning as before, we see that the sum over j vanishes in the large N limit unless the k's are pairwise equal. Again, we neglect the terms including groups of 4 or more equal k's, assuming that n≪ N^⋆, so that their relative contribution is O(1/N^⋆). That sum selects paired values of k for the product inside the square brackets, and we end with p^n(t)≃1/N(C_N^⋆)^n∑_𝐤∈Ω_n∏_l=1^Ncosω_k_lt . If t is “large enough” (we will come back to this point in the following section), different values of ω_k_l lead to completely uncorrelated values of cos(ω_k_l t). Hence, as before, we can consider the arguments of the cosines as extracted from a uniform distribution, obtaining p^n(t)≃C_N^⋆/2^n (N^⋆)^n/2 𝒩(Ω_n)≃m E_tot/N+1^n/2𝒩(Ω_n) . These are again the moments of the equilibrium Maxwell-Boltzmann distribution. We had to assume n ≪ N^⋆, meaning that a Gaussian distribution is recovered only in the limit of large number of degrees of freedom. The empirical distribution can be compared with the Maxwell-Boltzmann by looking at the Kullback-Leibler divergence K(𝒫_e(p,t), 𝒫_MB(p)) which provides a sort of distance between the empirical 𝒫_e(p,t) and the Maxwell-Boltzmann: K[𝒫_e(p,t), 𝒫_MB(p)]= - ∫𝒫_e(p,t) ln𝒫_MB(p)/𝒫_e(p,t) dp. Figure <ref> shows how the Kullback-Leibler divergences approach their equilibrium limit, for different values of N. As expected, the transition happens on a time scale that depends linearly on N. A comment is in order: even if this behaviour may look similar to the H-Theorem for diluited gases, such a resemblance is only superficial. 
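(For completeness, the divergence in Eq. (<ref>) can be estimated directly from a histogram of the momenta; the short sketch below, which reuses the momenta_at helper of the previous snippet, is again only our illustration, with arbitrary binning choices.)

```python
import numpy as np

def kl_to_maxwell_boltzmann(p_sample, beta, m=1.0, bins=60):
    """Histogram estimate of K[P_e(p,t), P_MB(p)] for the momenta at a given time."""
    counts, edges = np.histogram(p_sample, bins=bins)
    freq    = counts / counts.sum()                  # relative frequency of each bin
    widths  = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p_mb    = np.sqrt(beta / (2.0 * np.pi * m)) * np.exp(-beta * centers**2 / (2.0 * m))
    mask    = freq > 0                               # empty bins contribute nothing
    p_emp   = freq[mask] / widths[mask]              # empirical density in occupied bins
    return float(np.sum(freq[mask] * np.log(p_emp / p_mb[mask])))

# Relaxation of the snapshot distribution, as in the figure discussed above:
# for t in np.linspace(0.0, 20.0 * N / omega0, 40):
#     print(t, kl_to_maxwell_boltzmann(momenta_at(t), beta))
```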
Indeed, while in the cases of diluited gases the approach to the Maxwell-Boltzmann is due to the collisions among different particles that actually exchange energy and momentum, in the considered case the “thermalization” is due to a dephasing mechanism. § ANALYSIS OF THE TIME SCALES In the previous section, when considering the distribution of the momenta at a given time, we had to assume that t was “large enough” in order for our approximations to hold. In particular we required cos(ω_k_1t) and cos(ω_k_2t) to be uncorrelated as soon as k_1 k_2. Such a dephasing hypothesis amounts to asking that |ω_k_1t-ω_k_2t|> 2π c , where c is the number of phases by which the two oscillator have to differ before they can be considered uncorrelated. The constant c may be much larger than 1, but it is not expected to depend strongly on the size N of the system. In other words, we require t> c/|ω_k_1-ω_k_2| for each choice of k_1 and k_2. To estimate this typical relaxation time, we need to pick the minimum value of |ω_k_1-ω_k_2| among the possible pairs (k_1,k_2). This term is minimized when k_1=k̃ and k_2=k̃-1 (or vice-versa), with k̃ chosen such that ω_k̃-ω_k̃-1 is minimum. In the large-N limit this quantity is approximated by ω_k̃-ω_k̃-1=ω_0sink̃π/2N+2-ω_0sink̃π- π/2N+2≃ω_0cosk̃π/2N+2π/2N+2 , which is minimum when k̃ is maximum, i.e. for k̃=N^⋆. Dephasing is thus expected to occur at t> 4cN/ω_0cosN^⋆π/2N , i.e. t>4cN/ω_0 in the N^⋆/N ≪ 1 limit. It is instructive to compare this characteristic time with the typical relaxation time of the “damped” version of the considered system. For doing so, we assume that our chain of oscillators is now in contact with a viscous medium which acts at the same time as a thermal bath and as a source of viscous friction. By considering the (stochastic) effect of the medium, one gets the Klein-Kramers stochastic process <cit.> ∂ q_j/∂ t=p_j/m ∂ p_j/∂ t=ω_0^2(q_j+1 - 2 q_j + q_j-1) -γ p_j + √(2 γ T)ξ_j where γ is the damping coefficient and T is the temperature of the thermal bath (we are taking the Boltzmann constant k_B equal to 1). Here the {ξ_j} are time-dependent, delta-correlated Gaussian noises such that ξ_j(t)ξ_k(t')=δ_jkδ(t-t'). Such a system is surely ergodic and the stationary probability distribution is the familiar equilibrium one 𝒫_s(𝐪,𝐩) ∝ e^-H(𝐪,𝐩)/T. Also in this case we can consider the evolution of the normal modes. By taking into account Eqs. (<ref>) and (<ref>) one gets Q̇_̇k̇ =1/m P_k Ṗ_̇k̇ =- ω_k^2 Q_k - γ/m P + √(2 γ T)ζ_k where the {ζ_k} are again delta-correlated Gaussian noises. It is important to notice that also in this case the motion of the modes is independent (i.e. the friction does not couple normal modes with different k); nonetheless, the system is ergodic, because the presence of the noise allows it to explore, in principle, any point of the phase-space. The Fokker-Planck equation for the evolution of the probability density function 𝒫Q_k,P_k,t of the k-th normal mode can be derived using standard methods <cit.>: ∂_t𝒫=-∂_Q_kP_k𝒫+∂_P_kω_k^ 2Q_k𝒫+γ/mP_k𝒫+γ T∂_P_k^2 𝒫 . The above equation allows to compute also the time dependence of the correlation functions of the system in the stationary state. In particular one gets d/dtQ_k(t) Q_k(0)=1/mP_k(t)Q_k(0) and d/dtP_k(t) Q_k(0)-ω_k^2 m Q_k(t) Q_k(0) -γ/mP_k(t) Q_k(0) , which, once combined together, lead to d^2/d t^2Q_k(t) Q_k(0)+γ/md/dtQ_k(t) Q_k(0)+ ω_k^2Q_k(t) Q_k(0)=0 . 
For ω_k < γ/m the solution of this equation admits two characteristic frequencies ω̃_±, namely ω̃_± = (γ/2m) [ 1 ± √(1 - m^2 ω_k^2/γ^2) ]. In the limit ω_k ≪ γ/m one has therefore ω̃_- ≃ (m/4γ) ω_k^2 ≃ m ω_0^2 π^2 k^2/(γ N^2). Therefore, as a matter of fact, even in the damped case the system needs a time that scales as N^2 in order to achieve complete relaxation of the modes. As we discussed before, the dephasing mechanism that guarantees “practical” ergodicity in the deterministic version is instead expected to occur on time scales of order O(N).

§ CONCLUSIONS

The main aim of this paper was to expose, at a pedagogical level, some aspects of the foundations of statistical mechanics, namely the role of ergodicity in the validity of the statistical approach to the study of complex systems. We analyzed a chain of classical harmonic oscillators (i.e. a paradigmatic example of an integrable system, which cannot be suspected of showing chaotic behaviour). By extending some well-known results by Kac <cit.>, we showed that the Maxwell-Boltzmann distribution approximates with arbitrary precision (in the limit of a large number of degrees of freedom) the empirical distribution of the momenta of the system, after a dephasing time which scales with the size of the chain. This is true also for quite pathological initial conditions, in which only a small fraction of the normal modes is excited at time t=0. The scaling of the typical dephasing time with the number of oscillators N may appear as a limit of our argument, since this time diverges in the thermodynamic limit; on the other hand one should consider, as explicitly shown before, that the damped version of this model (which is ergodic by definition) needs times of the order O(N^2) to reach thermalization of each normal mode. This comparison clearly shows that the effective thermalization observed in large systems has little to do with the mathematical concept of ergodicity, and is instead related to the large number of components concurring to define the global observables that are usually taken into account (in our case, the large number of normal modes that define the momentum of a single particle). When these components cease to be in phase, the predictions of statistical mechanics start to be effective; this can be observed even in integrable systems, with no need for the mathematical notion of ergodicity to hold. In other words, we believe that the present work gives further evidence for the idea (which had been substantiated mathematically by Khinchin, Mazur and van der Linden) that the most relevant ingredient of statistical mechanics is the large number of degrees of freedom, and the global nature of the observables that are typically taken into account.

§ ACKNOWLEDGEMENTS

RM is supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) "A Multiscale integrated approach to the study of the nervous system in health and disease" (DN. 1553 11.10.2022).
http://arxiv.org/abs/2307.03984v1
20230708141612
Optimizing Task Waiting Times in Dynamic Vehicle Routing
[ "Alexander Botros", "Barry Gilhuly", "Nils Wilde", "Armin Sadeghi", "Javier Alonso-Mora", "Stephen L. Smith" ]
cs.RO
[ "cs.RO", "cs.SY", "eess.SY", "68M20", "J.2" ]
http://arxiv.org/abs/2307.03988v1
20230708143306
PCG-based Static Underground Garage Scenario Generation
[ "Wenjin Li", "Kai Li" ]
cs.AI
[ "cs.AI", "cs.RO" ]
PCG-based Static Underground Garage Scenario Generation

Wenjin Li, Kai Li

Wenjin Li and Kai Li are with the Department of Computer Science and Technology, Southern University of Science and Technology, Shenzhen, 518055, China

Autonomous driving technology has five levels, from L0 to L5. Currently, only the L2 level (partial automation) can be achieved, and there is a long way to go before reaching the final level of L5 (full automation). The key to crossing these levels lies in training the autonomous driving model. However, relying solely on real-world road data to train the model is far from sufficient and consumes a great deal of resources. Although there are already examples of training autonomous driving models through simulators that recreate real-world scenarios, these scenarios require complete manual construction. Directly converting 3D scenes from road network formats lacks a large amount of detail and cannot be used as a training set. We regard static underground parking garage scenario simulation as a procedural content generation (PCG) problem. This paper uses the Sarsa algorithm to solve procedural content generation for underground garage structures.

Automated driving, underground garage planning, reinforcement learning, procedural content generation, Sarsa

§ INTRODUCTION According to a recent technical report by the National Highway Traffic Safety Administration (NHTSA), 94% of road accidents are caused by human errors <cit.>. Against this backdrop, Automated Driving Systems (ADSs) are being developed with the promise of preventing accidents, reducing emissions, transporting the mobility-impaired, and reducing driving-related stress <cit.>. Autonomous driving simulation is an important part of ADSs. However, simulation lacks interactive and changeable scenarios <cit.>; researchers still build each training scenario by hand for large-scale training. Procedural Content Generation for Games (PCG-G) is the application of computers to generate game content, distinguish interesting instances among the ones generated, and select entertaining instances on behalf of the players <cit.>. In our project, we consider the underground garage as the game content to be generated. The problem can normally be divided into three parts. The first part is to create the digit grid map for each type of floor, as a PCG task. The second part is to convert each type of floor to the design diagram. The last part is to simulate the whole 3D scenario map based on the design diagram. To simplify the simulation, we combine the last two parts into one. In reinforcement learning <cit.>, an agent seeks an optimal control policy for a sequential decision-making problem. We regard the first part as a sequential decision-making problem. Markov decision processes (MDPs) are effective models for solving sequential decision-making problems <cit.> in uncertain environments. The agent's policy can be represented as a mapping from each state it may encounter to a probability distribution over the available actions <cit.>. Generalized policy iteration (GPI) was demonstrated as a class of iterative algorithms for solving MDPs in <cit.>.
It contains policy iteration (PI) and value iteration (VI) as special cases and has both advantages of PI and VI. Temperal-difference <cit.> is the specific implementation of GPI <cit.>. TD methods are guaranteed to converge in the limit to the optimal action-value function, from which an optimal policy can be easily derived. A classic TD method is Sarsa <cit.>. The on-policy algorithm, in which policy evaluation and policy improvement are identical, has important advantages. In particular, it has stronger convergence guarantees when combined with function approximation, since off-policy approaches can diverge in that case. In this paper, we use the Sarsa algorithm to create a digit grid map. Simulation is an important step during the conversion <cit.>. We consider the simulator can generate test scenarios automatically, including static buildings, dynamic traffic flow, and real-time calculated lighting and weather. This paper aims to solve the static scene generation problem. § RELATED WORK Abdullah <cit.> compared the space utilization efficiency of diagonal, parallel, and perpendicular parking methods and concluded that perpendicular parking methods have the highest number of spaces, using a university as a specific example. Sawangchote <cit.> developed a heuristic algorithm for the layout of parking spaces in small-scale garages based on the space utilization of different parking methods; Xu Hanzhe <cit.> carries out a parking space layout design based on a greedy algorithm to study the influence of irregular contours and obstacles on the layout of parking spaces and get the layout plan with the most number of parking spaces. Julian Togelius <cit.> finds that the result of the composition of a bunch of different algorithms is better than the result of any single algorithm and He used answer set programming to do procedure content generation. Huimin Wang <cit.> has previously proposed a model-based reinforcement learning algorithm to implement the path planning problem. The path planning problem has similar features when it applies to the specialized PCG problem. We consider that generation on a garage can use the method of path planning on agent moving. Besides, Arunpreet Sandhu <cit.> comes up with the WFC algorithm to generate similar images. Akatsu <cit.> provides an idea for evaluating underground garage structures by feeding a series of indicators obtained from a realistic traffic survey into a modeled underground garage structure to obtain a series of evaluation results. § METHODOLOGY §.§ Overall We consider dividing the underground garage construction into two main parts, PCG task and simulation. Notations using throughout this report are as follows: Since the most important thing in static underground garage scenario generation problems is the planning of parking stalls. For parking space planning problem, it is essentially an optimization problem of object placement, the objects to be placed will have the following distinction: * static object: object's position will not change after confirming the position * dynamic object: objects can wait for further optimization after confirming the position of static objects Now we only need to consider the dynamic object distribution, in order to better describe the entire underground garage object planning situation, here we rasterize the underground garage by using three matrices S_i,j, R_i,j, C_i,j to describe the state of an underground garage. 
In this paper, we will use reinforcement learning to plan the distribution of dynamic objects, by combining the distribution with the distribution of static objects to obtain the S_i,j as the result of parking space planning, and finally combine the R_i,j and C_i,j as the plane structure of the static underground garage to pass into the Unity3D engine for 3D modeling to finally generate the static underground garage scenario. We provide the following requirements for a reliable garage: * Reality: The generated basement structure needs to adapt to real-world standards (such as national standards and regulations) * Feasibility: Ensure that at least one route to any exit and entrance can be found for each parking space arranged in the basement structure * Randomness: The structure and contour of the basement are randomly generated, and the solution generated each time will change according to the change of the random process * Bijection: Each generated basement structure has a unique corresponding random process, and this random process must correspond to a unique basement structure * Customizability: The structure of the basement can be self-defined §.§ Static objects generation First, we give a definition of structure matrix 𝒮(i,j): 𝒮(i,j)={ 0 , parking space or free space -1 , obstacle 1 , lane 2 , entrance 3 , exit . At the beginning of getting this matrix, we should confirm the location of those static objects, which can be divided into three steps: contour generation, entrance and exit generation, and obstacle generation. First, we need to generate the contour of the underground garage. Divide a w× h rectangle into w× h blocks and each block has a width and height of 1. We consider generate n groups of 2n points in this rectangle and use the line of two points of each group as the diagonal of the rectangle to generate a rectangle and then after expand all rectangles to its corresponding squares, We will treat the concatenation of all rectangles as a generated underground garage contour. The following algorithm shows the generation of underground garage contour. After contour generation, we can get all squares in the floor plan, which mean we get ζ and ψ and then assign values to all those squares in ζ and ψ: 𝒮(ζ) = 0 𝒮(ψ) = -1 Secondly, we need to determine the position of the entrance and exit. After contour generation, in order to generate a reliable position of entrance and exit, we give a definition of ξ and η. A frontier square needs to satisfy the following conditions: 𝒮(ξ) = 0 ∑_i=1^8𝒮(ρ_ξ) < 0 An inner square needs to satisfy the following conditions: 𝒮(η) = 0 ∑_i=1^8𝒮(ρ_η) = 0 Since entrances and exits can only be generated in ξ and cannot be generated on the corners of ξ, in this condition, we only generate entrance and exit on those squares satisfy the following condition: ϵ∈ξ ∑_i=1^8𝒮(ρ_ϵ) = -3 M(ϵ_i,ϵ_j) ≥σ_1 Thirdly, we need to consider the position of obstacles in this underground garage. We only generate obstacles on those squares satisfying the following conditions: o ∈η M(o_i,o_j) ≥σ_2 §.§ Reinforcement Learning Reinforcement learning (RL) is a basis to solve our PCG problem. In this paper, we first focus on finite Markov decision processes (finite MDPs). A finite Markov decision process can be represented as a 4-tuple M = {S, A, P, R}, where S is a finite set of states; A is a finite set of actions; P : S× R × S × A → [0, 1] is the probability transition function; and R : S × A →ℛ is the reward function. 
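Before developing the reinforcement-learning formulation further, we give a concrete (and purely illustrative) reading of the static-object generation described above: the contour as a union of random rectangles, followed by the frontier/inner classification used to place entrances and exits. The grid size, the number of rectangles and the helper names below are our own assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_contour(w=20, h=14, n_rect=4):
    """Union of random axis-aligned rectangles: 0 = usable square, -1 = outside/obstacle."""
    S = -np.ones((h, w), dtype=int)
    for _ in range(n_rect):
        x1, x2 = np.sort(rng.integers(0, w, size=2))
        y1, y2 = np.sort(rng.integers(0, h, size=2))
        S[y1:y2 + 1, x1:x2 + 1] = 0
    return S

def neighbourhood_sum(S, i, j):
    """Sum over the (up to) 8 neighbours of square (i, j); squares beyond the border
    are treated as obstacles (-1), which is our own convention."""
    total = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            total += S[ni, nj] if 0 <= ni < S.shape[0] and 0 <= nj < S.shape[1] else -1
    return total

def classify_squares(S):
    """Split usable squares into frontier squares (neighbourhood sum < 0) and inner squares."""
    frontier, inner = [], []
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            if S[i, j] != 0:
                continue
            (frontier if neighbourhood_sum(S, i, j) < 0 else inner).append((i, j))
    return frontier, inner

S = generate_contour()
frontier, inner = classify_squares(S)
# Entrance/exit candidates: frontier squares on a straight stretch of the boundary
# (neighbourhood sum exactly -3); the pairwise distance constraint sigma_1 is omitted here.
candidates = [(i, j) for (i, j) in frontier if neighbourhood_sum(S, i, j) == -3]
```

With the static part fixed, the remaining task is the placement of lanes and parking spaces, which is where the reinforcement-learning formulation comes in.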
In this paper, we denote the probability of the transition from state s to another state s' when taking action a by P(s', r|s, a) and the immediate reward received after the transition by r_s^a <cit.>. A policy is defined as a mapping, π: S× A→ [0,1]. In this paper, we use π(s) to represent the action a in state s under the policy π. To measure the quality of a policy, action-value function, q_π(s, a) is used to estimate the expected long-term cumulative reward of taking action a in state s under a policy π. It is formally defined as: q_π(s,a)=𝔼_π[∑_k=0^∞γ^kR_t+k+1| S_t=s, A_t=a] where γ is a discount factor, R_t is the reward at time-step t, and E_π is the expectation with respect to the policy π. The goal is to find an optimal policy π_* which maximizes the expectation of long-time discounted cumulative reward from any starting state s∈ S: π_*=*argmax_πE_π [∑_t=0^∞γ^t R_t|s_0=s] In this paper, we format PCG as an optimization problem <cit.>, which is represented as a 2-tuple (M, E ), where M is finite MDPs which can generate one 2D integer array and E is an evaluation function which evaluates the quality of array. We have one agent with policy π. It will tack action in state s and send a message to the environment. The environment receives the message and changes the state to the next state and sends rewards to the agent. Finally, the agent and environment produce a finite Markov decision array: S_0,A_0, R_1, S_1, A_1, R_2, S_2, A_2, R_3,…, S_T-1, A_T-1, R_T where T is the termination time. Evaluation function E is calculated from M E=∑_t=1^T-1 R_t R_T is always a negative value and it is not included in E. In other words, we come back to the previous unfailed state to compute E. Generalized policy iteration (GPI) contains two processes, policy evaluation (E) and policy improvement (I): π_0E→ q_π_0I→π_1E→ q_π_1I→π_2E→…I→π_*E→ q_* where q_π_i is action value function under π at episode i. The process is terminated when q and π converges to q_* and π_*. For Sarsa algorithm, policy evaluation and policy improvement are carried out simultaneously in each episode. The agent and environment in MDP are clear. Our design is divided into two sections. In the first section, we design the MDP for our PCG task. In the other section, we design the environment penalty based on the principle of parking lot design. §.§ Sarsa We use the Sarasa algorithm to solve the PCG task. First, we define the parameters of MDPs. We consider a car in a 2D place as an agent to perform a colouring task, which colours the undefined square to a lane spuare. Agent's state at timestamp t is defined as the multiple dimensional vectors: S_t=(D, M, A_t-1) Where D is a 4-dimensional vector that each element point to the distance between the free space, border, or obstacle and agent in the direction, M is a 25-dimensional vector that symbols to the perception range of the agent. It satisfies that all points have a Manhattan distance of less than 2 from the agent. The agent takes action from the action set A={UP, DOWN, LEFT, RIGHT, STAY} The goal is to colour the road as much as possible until it comes back to the start and takes action STAY, leading to a terminate state. Agent receives rewards depending on the increment of the number of parking spaces. The agent also receives a penalty for some wrong actions. To evaluate one policy π, we predict one Markov decision array containing S, A, R for each episode. 
We update q(S_t, A_t) during the prediction, following the function: q(S_t, A_t) = q(S_t, A_t) + α× (R_t+1 + γ× q(S_t+1, A_t+1)-q(S_t, A_t)) where α and γ are parameters, with 0≤α, γ≤ 1. We use greedy method to improve one policy: π(s)=*argmax_a q(s,a) where π(s) is the greedy action under policy π. We consider using ϵ-greedy to take action, where the agent has ϵ chance of taking greedy action with maximum value otherwise taking action equivalently. The probability of taking greedy action π(s) in state s is: p(s, π(s)) = (1-ϵ)+ϵ/|A| §.§ Penalty design The principle of parking lot design has been proposed for optimizing parking area space. * Use rectangular areas where possible * Make the long sides of the parking areas parallel * Design so that parking stalls are located along the lot's perimeter * Use traffic lanes that serve two rows of stalls <ref> conforms the above principle, where green square refers to lane square, orange square refers to parking square or free square, and white square refers to entrance or exit. Contrary to <ref>, <ref> has many problems: no cycle, existing non-rectangular and non-parallel areas, and many lanes serving only one row of the stall. The agent can not only receive a reward after the action but also a certain penalty we defined. The reasonable penalty guides agents to do actions they want. Based on the design principle, we propose several penalties below: * Turn-back penalty when the agent takes the opposite action from the last action. * Interval penalty based on the interval of the same actions. * Wheeling penalty at an improper position with a certain direction. * Step penalty for each timestamp to prevent agents from cycling consistently. §.§ Convert matrix to simulated underground garage After generating structure matrix 𝒮(i,j), we need to convert this matrix to a simulated underground garage. Here we first atomize the elements of the matrix, we define the below equation: n = ∑_i=1^4𝒮(θ_η) and for any square η, if: 𝒮(η) = 1 we define η as: η={ Crossroads , n = 4 T-Junctions , n = 3 Straight road , n ≤ 2 . and if: 𝒮(η) = 0 we define η as different types in Figure 2: η={ Type1 , n ≥ 3 or across n = 2 Type2 , adjacent n = 2, Type3 , n = 1 Type4 , n = 0 . Then, we only need to model each type of square η in the simulator and use scripts to construct the simulated underground garage. §.§ Construction of underground garage structure We know that autonomous vehicles typically use multiple types of sensors to collect and process environmental information to support the vehicle's decision-making and control systems <cit.>. The parking garage structure we generate is intended to provide training scenarios for autonomous vehicles, and the information collected during autonomous vehicle training comes from the simulated scenes, such as the lighting of light sources, the materials of various object surfaces, and information on the different light reflections of objects in the scene, and so on <cit.>. If we can better simulate the various objects in these scenes, the amount of information contained in the overall static parking garage scene will be greater, and it will better provide training data for autonomous vehicles, achieving better training effects. 
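Returning to the learning loop defined by the Sarsa update, the ϵ-greedy rule and the penalties introduced above, a minimal tabular sketch is given below. The environment interface (reset/step), the folding of the penalties into the reward, and all parameter values are hypothetical choices made for illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT", "STAY"]

def choose_action(q, state, eps):
    """epsilon-greedy: exploit the greedy action with probability 1 - eps, otherwise act uniformly."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def run_episode(env, q, alpha=0.1, gamma=0.95, eps=0.1):
    """One Sarsa episode. `env` is a hypothetical colouring environment exposing
    reset() -> state and step(action) -> (next_state, reward, done); the reward is
    assumed to already include the turn-back / interval / wheeling / step penalties."""
    state = env.reset()
    action = choose_action(q, state, eps)
    total, done = 0.0, False
    while not done:
        next_state, reward, done = env.step(action)
        next_action = choose_action(q, next_state, eps)
        target = reward + (0.0 if done else gamma * q[(next_state, next_action)])
        q[(state, action)] += alpha * (target - q[(state, action)])
        state, action = next_state, next_action
        total += reward
    return total

q = defaultdict(float)          # tabular action values q(s, a)
# for episode in range(5000): run_episode(env, q)
```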
The construction details of a static underground parking garage mainly include object surface texture mapping, such as: * Lane marking texture mapping * Wall texture mapping * Floor texture mapping * Lighting texture mapping As well as collision bodies in the underground parking garage, such as: * Column mesh collision body * Speed bump collision body * Parking barrier And here we give the detailed procedure of underground garage generation in Unity3D: * The structure matrix 𝒮_(i,j) previously generated by using reinforcement learning is used as the generated underground structure, and the R_(i,j) and C_(i,j), which define the length and width of each plot of land in reality, are passed as input into Unity3D engine. * In the Unity engine, each different state of the land is first modeled, and then the entire underground plane is automatically generated based on the arrangement of elements in the specific structure matrix. * After generating the plane, three-dimensional information such as walls, pillars, ceilings, obstacles, etc. are further generated based on the outline of the underground structure. * According to the generated structure, more detailed descriptions are made, such as light tubes, ventilation ducts, and other underground details. * According to the demand, some objects that may appear underground, such as parked vehicles and no parking signs, are randomly generated. § EXPERIMENTAL SETUP §.§ Evaluation After generating the underground garage structure, we need to evaluate it, but there is no unified and credible standard for the evaluation function. So we proposed the following three dimensions to describe the value of the underground garage structure by combining the evaluation system of several papers: * the number of the parking spot * the average parking time * the number of unused squares So the evaluation function is like: y^' = k_1 * N_S + k_2 * T_S + k_3 * U_S To obtain the proportion of weights accounted for by each of these three criteria, here we assume that there exists a corresponding evaluation function for a certain underground garage structure, and the value distribution of all solutions for that structure is roughly Gaussian distributed. Based on this, we can know that if we have enough sampling points and judge the value size relationship of the structure in the sampling points, we can correspond these sampling points to the Gaussian distribution curve one by one, and then make the estimated value order of the sampling points the same as before by adjusting the weights of our evaluation function, so that we get an evaluation function with a certain degree of confidence, and when more and more points are sampled, the final evaluation function will be more credible. Here, we sampled a series of more representative experimental results and derived the above values for the three coefficients: y^' = N_S + (-5) * T_S + (-1) * U_S We conducted a 5000-episode cycle test for Sarsa algorithm with one garage contour. For each episode, we save the matrix and evaluation on it to the dictionary. In the end, we select top 200 matrix with high evaluation function value. §.§ Simulation of Underground Garage The main hardware devices used in the simulation to generate the underground garage scenario are: CPU: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz, GPU: NVIDIA GeForce GTX 1650 and the software are: Unity3D 2021.3.5f1c1, Visual Studio 2022 § RESULTS §.§ Sarsa Result <ref> indicate that the agent easily achieves the local limit at episode 400. 
Then it drops sharply to a small value. It maintains a trend of first converging to the limit and then decreasing sharply, and it keeps searching for a solution as long as the test continues. However, we observed that as the number of episodes increases, there are instances where the agent obtains lower payoffs. This can be attributed to the ϵ-greedy strategy, which sometimes leads the agent directly to the termination state. To increase the convergence rate, we let ϵ decrease slowly. We also reset the value of ϵ if the matrix stays unchanged for 100 consecutive episodes. <ref> shows the matrix with the highest evaluation value during the test. It is slightly inferior to the manually constructed <ref> and <ref>. §.§ Simulated Underground Garage <ref> shows the underground garage model simulated by using as input the structure matrix generated by the above reinforcement learning algorithm after 3000 iterations. § DISCUSSION For the evaluation function, there is no unified, credible standard, and the coefficients given in this paper are only a fit to the real value curve. At the same time, since the structure of an underground garage with different contours affects the three evaluation indexes we selected, the values of the coefficients for different contours may also be inconsistent; obtaining coefficients for each underground garage contour may require more sampling and training through neural networks <cit.>. Nevertheless, we were able to correctly evaluate the generated parking-space structures with the evaluation function obtained from sampling on the 7*9 square contour. Fig. 5 and Fig. 6 are the manually designed structures considered to be of higher value according to conventional design reasoning, while Fig. 1 to Fig. 4 are the top four structures by value selected by the evaluation function from the results of 5000 generated episodes. The selected structures, although not perfect, meet several of the most basic requirements of underground garage parking-space design and are indeed somewhat more valuable than the manually designed structures. § CONCLUSIONS Sarsa, an on-policy TD algorithm, performs well in this paper. It can eventually generate reliable grid maps. However, the state set is so large that it cannot converge to a single solution that attains the highest return. This study demonstrates the feasibility of using reinforcement learning to programmatically generate underground garage grid maps. We have yet to reach the goal of generating a reliable underground garage from an arbitrary contour; PCG of underground garage design still has a long way to go. In terms of simulation, we are currently able to construct the corresponding 3D underground parking garage, and the generated garage has certain details: real-time lighting, ventilation ducts, column network structure, etc. Some details, such as the various pipe layouts, are not yet realistic, and various scene elements can be further rendered to achieve a more convincing effect. This will allow us to further enhance the accuracy and reliability of the generated underground garage maps. These findings provide valuable insights for the development of intelligent underground garage planning and design tools.
In the future, we will extend this work with other AI technologies, such as classification <cit.>, knowledge graphs <cit.>, and deep learning <cit.>.
http://arxiv.org/abs/2307.04338v1
20230710043023
Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey
[ "Dongqi Fu", "Wenxuan Bao", "Ross Maciejewski", "Hanghang Tong", "Jingrui He" ]
cs.LG
[ "cs.LG", "cs.CR" ]
Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey Dongqi FuFirst two authors contribute equally to this research.^†,   Wenxuan Bao^†,   Ross Maciejewski^,   Hanghang Tong^†,   Jingrui He^† ^†University of Illinois Urbana-Champaign ^Arizona State University [email protected], [email protected], [email protected], [email protected], [email protected] August 12, 2023 ============================================================================================================================================================================================================================================================================================================================= In graph machine learning, data collection, sharing, and analysis often involve multiple parties, each of which may require varying levels of data security and privacy. To this end, preserving privacy is of great importance in protecting sensitive information. In the era of big data, the relationships among data entities have become unprecedentedly complex, and more applications utilize advanced data structures (i.e., graphs) that can support network structures and relevant attribute information. To date, many graph-based AI models have been proposed (e.g., graph neural networks) for various domain tasks, like computer vision and natural language processing. In this paper, we focus on reviewing privacy-preserving techniques of graph machine learning. We systematically review related works from the data to the computational aspects. We first review methods for generating privacy-preserving graph data. Then we describe methods for transmitting privacy-preserved information (e.g., graph model parameters) to realize the optimization-based computation when data sharing among multiple parties is risky or impossible. In addition to discussing relevant theoretical methodology and software tools, we also discuss current challenges and highlight several possible future research opportunities for privacy-preserving graph machine learning. Finally, we envision a unified and comprehensive secure graph machine learning system. § INTRODUCTION According to the recent report from the United Nations [<https://press.un.org/en/2022/sc15140.doc.htm>], strengthening multilateralism is indispensable to solve the unprecedented challenges in critical areas, such as hunger crisis, misinformation, personal identity disclosure, hate speech, targeted violence, human trafficking, etc. Addressing these problems requires collaborative efforts from governments, industry, academia, and individuals. In particular, effective and efficient data collection, sharing, and analysis are at the core of many decision-making processes, during which preserving privacy is an important topic. Due to the distributed, sensitive, and private nature of the large volume of involved data (e.g., personally identifiable information, images, and video from surveillance cameras or body cameras), it is thus of great importance to make use of the data while avoiding the sharing and use of sensitive information. On the other side, in the era of big data, the relationships among entities have become remarkably complicated. Graph, as a relational data structure, attracts much industrial and research interest for its carrying complex structural and attributed information. 
For example, with the development of graph neural networks, many application domains have obtained non-trivial improvements, such as computer vision <cit.>, natural language processing <cit.>, recommender systems <cit.>, drug discovery <cit.>, fraud detection <cit.>, etc. Within the trend of applying graph machine learning methods to systematically address problems in various application domains, protecting privacy in the meanwhile is non-neglectable <cit.>. To this end, we consider two complementary strategies in this survey, namely, (1) to share faithfully generated graph data instead of the actual sensitive graph data, and (2) to enable multi-party computation without graph data sharing. Inspired by the above discussion, we focus on introducing two fundamental aspects of privacy-preserving techniques on graphs, i.e., privacy-preserving graph data and graph data privacy-preserving computation. For the data aspect, privacy-preserving graph data as shown in Figure <ref>, we focus on the scenario that when publishing or sharing the graph data is inevitable, how could we protect (e.g., mask, hide, or perturb) sensitive information in the original data to make sure that the published or shared data could survive from the external attackers (e.g., node identify disclosure and link re-identification). Hence, in Section 2, we systematically introduce various attackers [Throughout the paper, we use “attackers” to denote the attacks on graphs. There are also attackers that are designed not for graphs but for Euclidean data, for example. Those are not in the scope of this paper.] first (Subsection 2.1) and what backgroud knowledge they need to execute attacks (Subsection 2.2). Then, we introduce the corresponding protection mechanisms and explain why they can address the challenges placed by attackers (Subsection 2.3). Also, we share some graph statistical properties (other than graph data itself) privacy protection mechanisms (Subsection 2.4). After that, we list several possible challenges for privacy-preserving graph data generation when facing complex structures and attributes, e.g., time-evolving graphs and heterogeneous information graphs (Subsection 2.5). For the computation aspect, graph data privacy-preserving computation, we focus on the multi-party computation scenario where the input data is structured, distributed over clients, and exclusively stored (i.e., not shareable among others). Here, federated learning can be a quick-win solution. However, relational data structures (i.e., graphs) bring a significant challenge (i.e., non-IIDness) to the traditional federated learning setting. This means that the data from intra-clients and/or inter-clients can violate the independent and identically distributed assumption (i.e., the i.i.d. assumption) due to the presence of the complex graph features, whose data complexity hinders many existing federated learning frameworks from getting the optimal performance. Motivated by this observation, in Section 3, we first discuss the adaption of federated learning on graphs and the corresponding challenge from non-IIDness brought by graphs (Subsection 3.1), then we introduce how nascent graph federated learning research works to address the non-IIDness issues from three levels, i.e., graph-level federated learning (Subsection 3.2), subgraph-level (Subsection 3.3), and node-level (Subsection 3.4). Then, we list several challenges and promising research directions, including model heterogeneity and avoiding cross-client transmission (Subsection 3.5). 
After we introduce privacy-preserving graph data and graph data privacy-preserving computation with their own methodologies, advances, software tools, limitations, and future directions. In Section 4, we envision the necessity of combing these two directions into privacy-preserving graph data privacy-preserving computation to meet any possibility of leaking sensitive information, to further achieve a comprehensive, well-defined, and end-to-end graph machine learning system. Finally, the paper is concluded in Section 5. Relation with Previous Studies. For the privacy-preserving graph data, we systematically review the privacy attackers and the corresponding privacy protection techniques, which takes a balance of classic methods <cit.> and emerging solutions <cit.>, such as topology perturbation methods, deep generation methods, etc. Beyond that, we extend the privacy-preserving techniques review from the data level to the computation level, i.e., the graph data privacy-preserving computation within the federated learning framework. Most of the existing federated learning reviews do not primarily concentrate on graph federated learning <cit.>. Recently, two survey papers <cit.> introduce two problem settings in graph federated learning and their corresponding techniques. They exclusively focus on graph federated learning solutions and ignore the connections to traditional federated learning. Thus, we start from various application scenarios and provide a comprehensive classification and exposition of graph federated learning. While our focus primarily revolves around graph federated learning, we also highlight its connections and distinctions to traditional federated learning, aiming to present the big picture of this field. In addition to reviewing the two aspects (i.e., privacy-preserving graph data and graph data privacy-preserving computation), we also discuss the necessity and possibility of combining these two directions and propose several promising future research directions. § PRIVACY-PRESERVING GRAPH DATA As for making privacy-preserving graph data to publish or share, the ultimate goal is to successfully protect the published graph data from various attacks from adversaries or attackers. To this end, we first introduce the different kinds of attackers, such as node identity disclosure or sensitive link re-identification in Subsection 2.1 and necessary background knowledge in Subsection 2.2. Then, we introduce how the corresponding privacy-preserving mechanisms are proposed, such as several of them being deliberately designed to defend against certain attackers and some of them being general protections and not aiming at specific attacks, in Subsection 2.3. The taxonomy is shown in Figure <ref>. §.§ Privacy Attackers on Graphs According to <cit.>, what the attackers aim to attack is that they (1) want to learn whether edges exist or not between specific target pairs of nodes and also (2) want to reveal the true identities of targeted users, even from just a single anonymized copy of the graph, with a surprisingly small investment of effort. §.§.§ Category of Attackers Attackers can be classified into the active attackers and passive attackers <cit.>. The first category is active attackers, where the core idea is that the attackers actively plant certain structures into the graph before it is being published. Then, the attackers can identify victims in the published graph by locating the planted structures. 
For example <cit.>, the attackers create a subgraph H containing k nodes and then use H to connect b target nodes in the original graph G (subgraph H is better to be unique and has the property to be recovered in the published graph). After the original graph G is privacy-preserved (e.g., mask and disturb connections) and published as G', the attackers try to find H in G' and then determine those b nodes. Active attackers usually need to access the original graph beforehand and then make corresponding active actions like creating new nodes, linking new edges, and planting subgraphs. The planting and recovery operations are usually computationally costly <cit.>. Therefore, another direction points to passive attacks and defense. Passive attackers are based on the fact or the assumption that most entities (e.g., nodes and edges) in graphs usually belong to a unique, small identifiable graph. Then, different from active attackers, passive ones do not need to create new nodes and edges in the original but mostly rely on the observation of the published graph to identify victims. In the initial proposal of passive attacks <cit.>, a passive attacker (e.g., a node in a social network) needs to collude with other (k-1) nodes on the original graph, and the coalition needs to know the external information (e.g., their 1-hop neighbors' name in the social network), such that they can reconnect on the published graph to identify the victims. Here, we expand the scope of passive attacks to include the attackers whose core is observation plus little external information. For example, in <cit.>, an attacker knows the external background information like “Greg is connected to at least two nodes, each with degree 2” and tries to observe the candidate of plausible Greg in the published social network. §.§.§ Goal of Attackers The ultimate goals of most graph privacy attackers can be roughly divided into disclosing the node identity (e.g., name, DOB, and SSN in the social network) and the link existence (e.g., sensitive connections in the social network) <cit.>. Next, we formally introduce the general definition of these two goals. Node Identity Disclosure. The node identity disclosure problem often arises from the scenario that the attackers aim to identify a target node identity in the published graph (usually, which has been anonymized already). For example, in a published social network with usernames masked already, the node identity disclosure aims to identify which node is Greg <cit.>. To be more specific, the identity disclosure can be detailedly divided into node existence disclosure (i.e., whether a target node existed or not in a published graph), node property disclosure (i.e., partial features of a target node are disclosed like its degree, distance to the center, or even sensitive labels, etc) <cit.>. Link Re-Identification. In a given graph, edges may be of different types and can be classified as either sensitive or not. Some links (i.e., edges) are safe to release to the public, such as classmates or friendships. And some links are sensitive and should maintain private but not published, like the personal disease records with hospitals. The problem of link re-identified is defined as inferring or predicting sensitive relationships from anonymized graphs <cit.>. Briefly speaking, the adversary (or attacker) achieves the goal when it is able to correctly predict a sensitive link between two nodes. 
For example, if the attacker can figure out which there is a transaction between two users, given the properties of the released financial graph. Also, there are some detailed categorizations of the line re-identification other than the link existence, such as the link weight and link type or labels <cit.>. Compared with active attackers, passive attackers are typically efficient in executing for adversaries and do not need to interact with the original graph beforehand very much. Thus, within the scope of passive attackers, achieving those attacking goals (node identity disclosure or link re-identification) relies on the observation of the published graph and certain external background knowledge to further identify victims.[Node identity disclosure and link re-identification can also be achieved in active ways <cit.>, but in the paper, we focus on introducing the passive manners that achieve those goals.] Next, we focus on introducing what requirements passive attackers need to execute attacks passively. §.§ Background Knowledge for Passive Attacks Here, we first discuss some background knowledge that could contribute to the goal of node identity disclosure. Then, we list some background knowledge that could contribute to sensitive link re-identification attacks. §.§.§ Background Knowledge for Node Identity Disclosure In general, the background knowledge for achieving node identity disclosure is to help them to detect the uniqueness of victims (i.e., nodes in the published graph) and thus narrow down the scope of candidate sets to increase the successful attack probability. For example, assume that the attackers know some background knowledge ℋ about a target node, after that, the attackers observe the published graph and find 2 candidates satisfying the condition (i.e., ℋ), then the attackers have 50% confidence to reveal the identity of that target node in the published graph. Next, we introduce some methods to acquire background knowledge. Vertex Refinement Queries <cit.>. These are interactive queries, which describe the local structure of the graph around a target node x. The initial query in vertex refinement queries is denoted as ℋ_0(x) that simply returns the label of node x in the labeled graph (or a constant ϵ in the unlabeled graph). And ℋ_1(x) returns the degree of node x. Then, iteratively, ℋ_i(x) is defined as the multiset of ℋ_i-1(·) queries on 1-hop neighbors of node x, which can be expressed as follows. ℋ_i(x) = {ℋ_i-1(z_1), ℋ_i-1(z_2), …, ℋ_i-1(z_d_x)} where d_x is the degree of node x. For example, in a social network, ℋ_2(Bob)={1,1,4,4} means that Bob has four neighbors their degrees are 1, 1, 4, and 4, respectively. Subgraph Queries <cit.>. These queries assert the existence of a subgraph around a target node. Compared with the above vertex refinement queries, subgraph queries are more general (i.e., the information is not exclusively occupied to a certain graph structure) and flexible (i.e., informativeness is not limited by the degree of a target node). In brief, the adversary is assumed capable of gathering some fixed number of edges around a target node x and figuring out what subgraph structure those collected edges can form. For example, still targeting Bob in a social network, when collecting 3 edges, attackers can find 3 distinct neighbors. And collecting 4 edges can find a tree rooted by Bob. Those existences of structures form H such that attackers can use them to reveal the identity of Bob. 
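Returning for a moment to the vertex refinement queries, their iterative definition is straightforward to express in code. The toy example below is our own (with a made-up ego network around "Bob") and simply reproduces the ℋ_2(Bob) = {1,1,4,4} case mentioned above.

```python
def vertex_refinement(adj, x, i):
    """H_i(x) from the iterative definition above: H_0 is a constant label,
    H_1(x) is the degree of x, and H_i(x) is the multiset of H_{i-1} over x's
    neighbours. `adj` maps each node to the set of its neighbours (unlabeled graph)."""
    if i == 0:
        return "eps"                       # constant label for unlabeled graphs
    if i == 1:
        return len(adj[x])                 # degree of x
    return sorted(vertex_refinement(adj, z, i - 1) for z in adj[x])

# Toy social network: Bob has four neighbours of degrees 1, 1, 4, 4.
adj = {
    "Bob": {"a", "b", "c", "d"},
    "a": {"Bob"},
    "b": {"Bob"},
    "c": {"Bob", "d", "e", "f"},
    "d": {"Bob", "c", "e", "g"},
    "e": {"c", "d"},
    "f": {"c"},
    "g": {"d"},
}
print(vertex_refinement(adj, "Bob", 2))    # -> [1, 1, 4, 4]
```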
Also, different searching strategies can result in different subgraph structures. For example, based on collecting 3 edges from Bob, breadth-first exploration may result in a star subgraph, and depth-first exploration may end up with a three-node-line. We refer to <cit.>, where a range of searching strategies are tested to empirically illustrate the descriptive power of background knowledge. Hub Fingerprint Queries <cit.>. First of all, a hub stands for a node that has a high degree and a high betweenness centrality (i.e., the proportion of shortest paths in the graph that include that node) in the graph. Then, a hub fingerprint is the description of a node's connections to hubs. To be more specific, for a target node x, the corresponding hub fingerprint query ℋ_i(x) records the shortest distance towards each hub in a graph. In ℋ_i(x), i is the limit of measurable distance. For example, ℋ_1(Bob) = (1,0) means Bob is 1 distance away from the first hop and not connected to (or 1 distance non-reachable from) the second hub. And, ℋ_2(Bob) = (1,2) means that Bob is 1 distance away from the first hop and 2 distance away from the second hub. Neighborhood Relationships Queries <cit.>. Targeting a node, if an adversary has background knowledge about its neighbors and the relationship among the neighbors, then the victim can be identified in the anonymized graph. To be specific, the neighborhood relationship query rely more on the isomorphism of the ego-graph (i.e., 1-hop neighbors) of a target node to reveal its identity, compared with iterative vertex refinement query <cit.> and general subgraph query <cit.>. For example, in a social network, if Bob has two close friends who know each other (i.e., are connected) and two close friends who do not know each other (i.e., are not connected), then this unique information obtained by the adversary can be used to find Bob in the published anonymized graph. §.§.§ Background Knowledge for Link Re-Identification Link Prediction Probabilistic Model <cit.>. This probabilistic model is proposed to determine whether a relationship between two target nodes. And different kinds of background information (i.e., observation) can be leveraged to formalize the probabilistic model, such as (1) node attributes, e.g., two social network users who share the same interest are more likely to be friends; (2) existing relationships, e.g., two social network users in the same community are more likely to be friends; (3) structural properties, e.g., the high degree nodes are more likely to connect in a graph; and (4) inferred relationships (i.e., a complex observation that is more likely based on the inference of the invisible relationship), e.g., two social network users are more likely to be friends if they both are close friends of a third user. Mathematically, those above observations can be expressed for predicting the existence of a sensitive relation between node i and node j as P(e^s_ij|O), where e^s_ij stands for the sensitive relationship and O consists of several observations {o_1, …, o_n}. For example, if we use the second kind of information (i.e., existing relationships), then {o_1, …, o_n} is a set of edges between node i and node j with the edge type other than s, denoted as e^l_ij and l ∈{1, …, n} is the index of other edge relationships. To solve out P(e^s_ij|O), the noisy-or model <cit.> can be used as suggested by  <cit.>, where each observation o_l∈{o_1, … , o_n} is considered as independent with each other and parameterised as λ_l∈{λ_1, … , λ_n}. 
Moreover, there is a leak parameter λ_0 to capture the probability that the sensitive edge is there due to other unmodeled reasons. Hence, the probability of a sensitive edge is expressed as follows. P(e^s_ij = 1 | o_1, …, o_n) = 1 - ∏_l=0^n(1- λ_l) where s in e^s_ij is the indicator of sensitive relationship, and the details of fitting the values of λ_l can be found in <cit.>. Randomization-based Posterior Probability <cit.>. To identify a link, this observation is based on randomizing the published graph G' and counting the possible connections over a target pair of nodes i and j. And those countings are utilized for the posterior probability to determine whether there is a link between nodes i and j in the original graph G. Formally, the posterior probability for identifying the link e_ij in the original graph G is expressed as follows. P(e_ij = 1 | G'_s) = 1/N∑^N_s=11 (G'_s(i,j) == 1) where the attacker applies a certain randomization mechanism on the published graph G' N times to get a sequence of G'_s, and s ∈{1, …, N}. In each G'_s, if there is an edge connects the target nodes i and j, then the indicator function 1 (G'_s(i,j) == 1) will count one. §.§ Privacy-Preserving Mechanisms Here, we discuss some privacy-preserving techniques that are deliberately designed for specific attackers and also some general protection techniques that are not targeting attackers but can be widely applied. §.§.§ Protection Mechanism Designed for Node Identity Dislosure In general, the protection mechanisms are proposed to enlarge the scope of candidates of victims, i.e., reduce the uniqueness of victims in the anonymized graphs. k-degree Anonymization <cit.>. The motivation for k-degree anonymization is that degree distribution is highly skewed in real-world graphs, such that it is usually effective to collect the degree information (as the background knowledge) to identify a target node. Therefore, this protection mechanism aims to ensure that there at least exist k-1 nodes in the published graph G', in which k-1 nodes share the same degree with any possible target node x. In this way, it can largely prevent the node identity disclosure even if the adversary has some background knowledge about degree distribution. To obtain such anonymized graph G', the method is two-step. First, for the original graph G with n nodes, the degree distribution is encoded into a n-dimensional vector 𝐝, where each entry records the degree of an individual node; And then, based on 𝐝, the authors proposed to create a new degree distribution 𝐝', which is k-anonymous with a tolerated utility loss (e.g., isomorphism cost) instanced by the L_1 distance between two vectors 𝐝 and 𝐝'. Second, based on the k-anonymous degree vector 𝐝', the authors proposed to construct a graph G' whose degree distribution is identical to 𝐝'. k-degree Anonymization in Temporal Graphs <cit.>. For temporal graphs (i.e., graph structures and attributes are dependent on time <cit.>), this method aims to ensure that the temporal degree sequence of each node is indistinguishable from that of at least k-1 other nodes. On the other side, this method also tries to preserve the utility of the published graph as much as possible. 
To achieve k-anonymity, the proposed method first partitions the n nodes of the original temporal graph G into m groups using k-means, based on the distance between the temporal degree vectors 𝐝 of the nodes, where 𝐝 is a T-dimensional vector that records the degree of a node at each timestamp t. To preserve utility, constrained by the cluster assignment, the method refines the 𝐝 of each node into 𝐝' while minimizing the L_1 distance between the matrices 𝐃 and 𝐃' (which are the stacked 𝐝 and 𝐝'). After that, the anonymized temporal graph G' is constructed from 𝐃' and released for each timestamp individually. k-degree Anonymization in Knowledge Graphs <cit.>. Different from ordinary graphs, knowledge graphs have rich attributes on nodes and edges <cit.>. Therefore, the k-degree is upgraded to the k-attributed degree, which aims to ensure that a target node in the anonymized knowledge graph has k-1 other nodes that share the same attributes (i.e., node level) and degree (i.e., edge level) <cit.>. The k-degree anonymization solution is further extended in <cit.>, which addresses the setting where the data provider wants to continually publish a sequence of anonymized knowledge graphs (e.g., the original graph is updated over time, and so is the anonymized one). Then, in <cit.>, the k-ad (short for k-attributed degree) is extended to k^ω-ad, which aims to defend against node identity disclosure across ω consecutive anonymized versions of a knowledge graph. The basic idea is to partition nodes into clusters based on the similarity of node features and degrees; then, for knowledge graph updates (such as newly inserted or deleted nodes), manual intervention is applied (e.g., adding fake nodes) to ensure k^ω-anonymity; finally, the anonymized knowledge graph is recovered from the clusters. This initial idea <cit.> is further formalized and materialized in <cit.>. k-neighborhood Anonymization <cit.>. This protection is proposed to defend against node identity disclosure when the adversary possesses background knowledge about the neighborhood relationships of a target node (i.e., the Neighborhood Relationship Queries discussed in Subsection 2.2.1). The core idea is to insert nodes and edges into the original graph G to obtain an anonymized graph G', such that for a target node x there exist multiple nodes in G' whose neighborhood structure is isomorphic to that of x. Given a pair of nodes v and u in graph G (suppose node v is the target), the authors first propose the notion of neighborhood components and use depth-first search to encode the ego-nets Neighbor_G(v) and Neighbor_G(u) into vectors. Then, by comparing the difference between Neighbor_G(v) and Neighbor_G(u), they greedily insert the missing (labeled) nodes and edges (into Neighbor_G(v) or Neighbor_G(u)) to make Neighbor_G(v) and Neighbor_G(u) isomorphic. These inserted nodes and edges turn G into G'. k-automorphism Anonymization <cit.>. This method is proposed to defend against structural queries by attackers, especially subgraph queries (as discussed in Subsection 2.2.1). Basically, given an original graph G, this method produces an anonymized graph G' to publish, where G is a subgraph of G' and G' is k-automorphic. To do this, the authors propose the KM algorithm, which partitions the original graph G and adds copies of crossing edges to G, thereby converting G into G'. Hence, G' satisfies the k-different-match principle against subgraph query attacks, which means that there are at least k different matches in G' for any subgraph query, and those matches do not share any nodes.
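To make the k-anonymity notion used throughout this subsection concrete, the following is a minimal sketch (not taken from any of the cited works) that checks whether a graph is k-degree anonymous and greedily pads a degree sequence until every degree value is shared by at least k nodes; the function names and the simple greedy grouping are illustrative assumptions rather than the dynamic program of the original k-degree anonymization paper.

# A minimal sketch of the k-degree anonymity idea from Subsection 2.3.1.
# The greedy grouping below is an illustrative stand-in, not the cited algorithm.
from collections import Counter
import networkx as nx

def is_k_degree_anonymous(G: nx.Graph, k: int) -> bool:
    """True if every degree value in G is shared by at least k nodes."""
    degree_counts = Counter(d for _, d in G.degree())
    return all(count >= k for count in degree_counts.values())

def greedy_anonymize_degrees(degrees, k):
    """Greedily raise degrees (sorted in descending order) so that each group
    of at least k consecutive entries shares the same value."""
    d = sorted(degrees, reverse=True)
    anonymized = d[:]
    i = 0
    while i < len(d):
        j = min(i + k, len(d))
        # Merge the tail into this group if fewer than k entries would remain.
        if len(d) - j < k:
            j = len(d)
        group_value = anonymized[i]          # largest degree of the group
        for t in range(i, j):
            anonymized[t] = group_value      # raise every member to it
        i = j
    return anonymized

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print("2-degree anonymous?", is_k_degree_anonymous(G, k=2))
    original = [d for _, d in G.degree()]
    print("anonymized degree sequence (k=3):", greedy_anonymize_degrees(original, k=3))

In a full pipeline, the anonymized degree sequence would then be handed to a graph construction step that realizes it with minimal edge modifications, as described above.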
§.§.§ Protection Mechanism Designed for Link Re-Identification The general idea of solutions here is proposed to reduce the confidence of attackers (which usually can be realized by a probabilistic model) for inferring or predicting links based on observing the published anonymized graphs. Intact Edges <cit.>. This solution is straightforward and trivial. Given the link re-identification attacker aims to predict a target link between two nodes, and the corresponding link type (i.e., edge type) is denoted as s, then the intact edges strategy is to remove all s type edges in the original graph G and publish the rest as the anonymized graph G'. Those remaining edges are so-called intact. Partial-edge Removal <cit.>. This approach is also based on removing edges in the original graph G to publish the anonymized graph G'. Partial-edge removal does not exhaustively remove all sensitive (indexed by s type) edges in G, but it removes part of existing edges. Those removed existing edges are selected based on the criteria of whether their existence contributes to the exposure of sensitive links, e.g., they are sensitive edges, they connect high-degree nodes, etc. Even those removals can be selected randomly. Cluster-edge Anonymization <cit.>. This method requires that the original graph G can be partitioned into clusters (or so-called equivalence classes) to publish the anomymized graph G'. The intra-cluster edges are removed to aggregate a cluster into a supernode (i.e., the number of clusters in G is now the number of nodes in G'), but the inter-cluster edges are reserved in G'. To be more specific, for each edge whose edge type is not sensitive (i.e., not s type), if it connects any two clusters, it will be reserved in G'; otherwise, it will be removed. It can be observed that this method needs the clustering pre-processing, which also means that it can cooperate with the node anonymization method. For example, the k-anonymization <cit.> can be applied on the original graph G first to identify the equivalence classes, i.e., which nodes are equivalent in terms of k-anonymization (for example, nodes who have the same degree). Cluster-edge Anonymization with Constraints <cit.>. This method is the upgraded version of the previous cluster-edge anonymization, and it is proposed to strengthen the utility of the anonymized graph G' by adjusting the edges between clusters (i.e., equivalence classes). The core idea is to require the equivalence class nodes (i.e., cluster nodes or supernodes in G') to have the same constraints as any two nodes in the original graph G. For example, if there can be at most two edges of a certain type between nodes in G, there can be at most two edges of a certain type between the cluster nodes in G'. §.§.§ General Privacy Protection Mechanisms Besides the protections that are designed deliberately for the node identity disclosure and link re-identification risks, there are also other protection mechanisms that are not designed for a specific kind of attacker but for the general and comprehensive scenario, such as randomized mechanisms with constraints and differential privacy schema. Next, we will discuss these research works. Graph Summarization <cit.>. This method aims to publish a set of anonymized graphs G' given an original graph G, through the graph summarization manner. To be specific, this method relies on a pre-defined partitioning method to partition the original graph G into several clusters, then each cluster will just serve as a node in the anonymized graph G'. 
The selection of connecting nodes in G' results in the variety of G', which means that a sequence of G' will appear with a different edge connecting strategy. The detailed connection strategy can be referred to  <cit.>. Switching-based Graph Generation <cit.>. Here, the authors aim to publish the anonymized graph G' that should also preserve the utility of the original graph G. Therefore, they propose the graph generation method based on the switching operations that can preserve the graph features. Moreover, the switching is realized in an iterative Monte Carlo manner, each time two edges (a, b) and (c, d) are selected. Then they will switch into (a, d) and (b, c) or (a, c) and (b, d). The authors constrain that two selected edges are switchable if and only if the switching generates no more edges or self-edges, such that the overall degree distribution will not change. After sufficient Monte Carlo switching operations, the authors show that the original graph features (e.g., eigenvalues of adjacency matrix, eigenvectors of Laplacian matrix, harmonic mean of geodesic path, and graph transitivity) can be largely preserved in the anonymized graph G'. Spectral Add/Del and Spectral Switch <cit.>. The idea of this method starts from Rand Add/Del and Rand Switch. Rand Add/Del means that the protection mechanism randomly adds an edge after deleting another edge and repeats multiple times, such that the total number of edges in the anonymized graph will not change. Rand Switch is the method that randomly switches a pair of existing edges (t,w) and (u, v) into (t,v) and (u,w) (if (t,v) and (u,w) do not exist in the original graph), such that the overall degree distribution will not change. In <cit.>, the authors develop the spectrum-preserving randomization methods Spectral Add/Del and Spectral Switch, which preserve the largest eigenvalue λ_1 of the adjacency matrix 𝐀 and the second smallest eigenvalue μ_2 of the Laplacian matrix 𝐋 = 𝐃 - 𝐀. To be specific, the authors first investigate which edges will cause the λ_1 and μ_2 increase or decrease in the anonymized graph and then select the edges from different categories to do Rand Add/Del and Rand Switch to control the values of λ_1 and μ_2 not change too much in the anonymized graph. RandWalk-Mod <cit.>. This method aims to inject the connection uncertainty by iteratively copying each existing edge from the original graph G to an initial null graph G' with a certain probability, guaranteeing the degree distribution of G' is unchanged compared with G. Starting from each node u in the original graph G, this method first gets the neighbor of node u in G denoted as 𝒩_u. Then for each node in 𝒩_u, this method runs multiple random walks and denotes the terminated node in each walk as z. Finally, RandWalk-Mod adds the edge (u,z) to G' with certain probabilities under different conditions (e.g., 0.5, a predefined probability α, or 0.5d_u - α/d_u-1, where d_u is the degree of node u in G). Next, we introduce an important component in the graph privacy-preserving techniques, i.e., differential privacy <cit.>. The general idea of differential privacy is that two adjacent graphs (e.g., one node/edge difference between two graphs) are indistinguishable through the permutation algorithm ℳ. Then, this permutation algorithm ℳ satisfies the differential privacy. The behind intuition is that the randomness of ℳ will not make the small divergence produce a considerably different distribution, i.e., the randomness of ℳ is not the cause of the privacy leak. 
If the indistinguishability property is measured by ϵ, then the algorithm is usually called an ϵ-differential privacy algorithm. The basic idea can be expressed as follows. Pr[ℳ(G) ∈ S]/Pr[ℳ(G') ∈ S] ≤ e^ϵ where G and G' are adjacent graphs, ℳ is the differential privacy algorithm, and ϵ is the privacy budget. The above inequality states that, for any output set S, the two output probabilities are almost equivalent. Within the context of graph privacy, differential privacy algorithms can be roughly categorized into edge-level differential privacy and node-level differential privacy. Given the original input graph G, the output graph of the differentially private algorithm, ℳ(G), can be used as the anonymized graph G' to publish. Edge-level Differential Privacy Graph Generation. We first introduce edge-level differential privacy algorithms, meaning that the outputs of the privacy algorithm on two adjacent graphs (e.g., graphs differing in a single edge) are nearly indistinguishable. * DP-1K and DP-2K Graph Model <cit.>. This edge-level differential privacy algorithm is proposed with the goal of preserving the utility of complex degree distributions. Here, the 1K-distribution, denoted by P_1(G), is the ordinary node degree distribution of graph G; e.g., if the number of nodes with degree 1 is 10 then P_1(1) = 10, if the number of nodes with degree 2 is 5 then P_1(2) = 5, etc. The 2K-distribution, denoted by P_2(G), is the joint degree distribution of graph G, i.e., the number of edges connecting an i-degree node and a j-degree node, iterating over i and j. For example, P_2(2,3) = 6 means that the number of edges in G connecting a 2-degree node and a 3-degree node is 6. Hence, the DP-1K (or DP-2K) Graph Model first computes the 1K- (or 2K-) degree distribution P_1(G) (or P_2(G)) and then perturbs the degree distribution under edge-level DP to obtain P_1(G)' (or P_2(G)'). Finally, an off-the-shelf graph generator (e.g., <cit.>) is called to build the anonymized graph G' based on P_1(G)' (or P_2(G)'). * Local Differential Privacy Graph Generation (LDPGEN) <cit.> is motivated by perturbing the connection distribution, i.e., proportionally flipping existing edges to non-existing ones and vice versa. To make the generated graph preserve the original utility, LDPGEN <cit.> first partitions the original graph G into disjoint clusters and adds Laplacian noise to the degree vector of each node in each cluster, which guarantees local edge-level differential privacy. After that, an estimator is used to estimate the connection probabilities of intra-cluster and inter-cluster edges based on the noisy degree vectors, from which the anonymized graph G' is generated. * Differentially Private Graph Sparsification <cit.>. On the one hand, this method constrains the number of edges in the anonymized graph G' to be smaller, to a certain extent, than that of the original graph G. On the other hand, the method requires that the Laplacian of the anonymized graph G' approximates that of the original graph G (i.e., see Eq. 1 in <cit.>). The two objectives above are unified into an edge-level differential privacy framework. The new graph G' is then obtained by solving an SDP (i.e., semi-definite programming) problem. * Temporal Edge-level Differential Privacy. In <cit.>, two temporal graphs are adjacent if they only differ in one update (i.e., the existence or non-existence of a temporal edge, or different weights of an existing temporal edge).
Based on the Priv-Graph algorithm (i.e., adding noise to graph Laplacian matrix), Sliding-Priv-Graph <cit.> is proposed to (1) take recent updates and ensure the temporal edge-level differential privacy and (2) meet the smooth Laplacian property (i.e., the positive semi-definite of consecutive Laplacian matrices). Moreover, in <cit.>, the authors distinguish the edge-adjacency and node-adjacency in the temporal graphs. Two temporal graphs are node-adjacent (or edge-adjacent) if they only differ in one node (or edge) insertion or deletion. * Deep Graph Models with Differential Privacy. Following the synergy of deep learning and differential privacy <cit.>, another way to preserve privacy is targeting the gradient of deep graph learning models. In <cit.>, a deep graph generative model called DPGG_AN is proposed under the edge-level differential privacy constraints, where the privacy protection mechanism is executed during the gradient descent phase of the generation learning process, by adding Gaussian noise to the gradient of deep learning models. Node-level Differential Privacy Graph Generation. Compared with edge-level differential privacy, node-level differential privacy is relatively difficult to be formalized and solve. In <cit.>, authors contribute several theoretical node-level differential privacy solutions such as Flow-based Lipschitz extension and LP-based Lipschitz extensions. But they all focus on realizing part of the graph properties instead of the graph data itself, such as anonymized degree distribution, subgraph counting, etc. The same kind of research flavor also appeared in relevant node-level differential privacy works like <cit.>. Again, differential privacy mechanisms on graphs is a large and comprehensive topic, a more detailed introduction and extensive literature review can be found in <cit.>. §.§ Other Aspects of Graph Anonymization Here, we would also like to review several graph anonymization techniques, but the difference from the majority mentioned above is that: they are not publishing the anonymized graph G' but anonymize some non-trivial and graph statistics of the original graph G and release them to the public <cit.>. The central requirement for protecting the graph statistics is that some scalar graph parameters are essential to describe the graph topology (e.g., degree distributions) or even reconstruct the graph topology (e.g., the number of nodes and edge connection probability in the Erdos-Renyi graph). To this end, some methods focus on protecting the important graph parameters and their statistics before releasing them. For example, the spectrum of a graph (i.e., eigen-decomposition of the graph Laplacian matrix) can preserve many important graph properties such as topological connections, low-pass or high-pass graph single filters, etc. Therefore, in <cit.>, the authors proposed to permute the eigen-decomposition under the differential privacy and then release the permuted parameters. To be specific, given the original eigenvalues and eigenvectors, certain calibrated random noises are sampled and added to them under the differential privacy constraint. Under the same protection mechanism, i.e., differential privacy, the protection goal is set to be the number of occurrences of subgraphs in <cit.>, the sequence of degree distribution in directed graphs and undirected graphs in <cit.>, and the edge connection probability of random graphs in <cit.>. 
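As a concrete illustration of the statistic-release approach above, the following minimal sketch releases a degree histogram under edge-level differential privacy via the Laplace mechanism; the sensitivity bound and the helper names are our own illustrative assumptions and do not reproduce the mechanism of any specific cited work.

# A minimal sketch: releasing the degree histogram of a graph under edge-level
# differential privacy with the Laplace mechanism. Illustrative only.
import numpy as np
import networkx as nx

def dp_degree_histogram(G: nx.Graph, epsilon: float, rng=None):
    rng = rng or np.random.default_rng()
    degrees = np.array([d for _, d in G.degree()])
    hist = np.bincount(degrees, minlength=int(degrees.max()) + 2).astype(float)
    # Adding or removing one edge shifts the degree of two nodes by one, so at
    # most four histogram cells change by one each: L1 sensitivity <= 4.
    sensitivity = 4.0
    noisy = hist + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=hist.shape)
    return np.clip(noisy, 0.0, None)   # post-processing: no negative counts

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(n=200, p=0.05, seed=0)
    print(dp_degree_histogram(G, epsilon=1.0).round(1))

The released noisy histogram can then be fed to a degree-distribution-based graph generator, in the spirit of the DP-1K model discussed earlier, or published directly as a sanitized statistic.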
§.§ Challenges and Future Opportunities After introducing different graph anonymization techniques, we would like to share some open questions and corresponding challenges. §.§.§ Preserving Privacy for Temporal Graphs As discussed above, most privacy-preserving graph anonymization methods still consider the input graphs as static. However, in complex real-world scenarios, the graphs are usually evolving over time <cit.>, which brings critical challenges to the current privacy-preserving static graph generation process. In other words, the time domain enriches the node attribute dimension and may also dictate the attribute distribution, which leads to increased exposure risk. For example, some graphs contain multiple dynamics and accurately representing them could contribute to graph tasks like classification <cit.>. But, the existence of various dynamics increases the probability of being unique and enlarges the leaking risk. §.§.§ Preserving Privacy for Heterogeneous Graphs During the node identity disclosure and link re-identification, it can be observed that the majority of background knowledge is solely from structural queries, which is already forceful enough. In heterogeneous graphs <cit.>, the abundant node and edge features increase the risk of leaking sensitive information and bring challenges to protection mechanisms, especially the heterogeneous graphs start to evolve <cit.>. To the best of our knowledge, how to generate privacy-preserving heterogeneous or temporal graphs remains open. * What kind of feature information is sensitive in heterogeneous or time-evolving graphs and should be hidden in the generated graph? * If the corresponding sensitive information is determined, what techniques are effective for protecting structures and features in the heterogeneous or time-evolving environment? * Last but not least, if the corresponding protection mechanism is designed, how to maintain the generation utility simultaneously with privacy constraints? § GRAPH DATA PRIVACY-PRESERVING COMPUTATION In recent years, graph machine learning has become increasingly popular due to the abundance of graph-structured data in various domains, such as social networks, recommendation systems, and bioinformatics. However, graph data is usually distributed in multiple data sources, and each data owner does not have enough data to train satisfactory machine learning models, which require a massive amount of graph data. For example, biochemical industries may wish to collaboratively train a graph neural network model to predict the property of molecules. While we introduce one solution with privacy-preserving graph data generation in the last section, another solution is to enable multi-party computation without exchanging raw data. In this section, we introduce federated learning (FL) <cit.>, a machine learning system where multiple clients (i.e., data owners) collaboratively train machine learning models without exchanging their raw data. In particular, we first introduce the framework of federated learning and its applications with graph data in Subsection <ref>. Then we introduce important FL algorithms under three representative graph federated learning scenarios: graph-level FL (Subsection <ref>), subgraph-level FL (Subsection <ref>), and node-level FL (Subsection <ref>). Finally, we summarize the challenges of future opportunities of graph FL in Section <ref>. 
§.§ Framework and Applications of Federated Learning Federated learning (FL) <cit.> is a distributed learning system where multiple clients (i.e., data sources) collaborate to train a machine learning model under the orchestration of a central server (i.e., the service provider), while keeping their data decentralized and private <cit.>. This subsection provides an exposition of the FL framework, followed by an overview of the applications of federated learning to graph data. §.§.§ Federated Learning Framework A typical FL framework has one central server and N clients, each with its own dataset 𝒟_i. The main steps can be summarized as follows: * Parameter broadcasting. The server broadcasts the current global model to (selected) clients. * Local update. Each client locally trains its local model. * Parameter uploading. Each client uploads its model update back to the server. * Model aggregation. The server aggregates the model updates collected from clients and updates the global model. * Repeat: Steps 1-4 are repeated for multiple communication rounds until the global model converges to satisfactory performance. One of the most popular FL algorithms is FedAvg <cit.>. In each communication round, the server randomly selects a subset of clients and broadcasts the global model to them. Each client locally updates the model with multiple iterations of stochastic gradient descent and uploads its local model back to the server. Finally, the server computes a weighted average of the local model parameters and updates the global model parameters. Algorithm <ref> gives the pseudo-code of FedAvg, and a minimal code sketch of this training loop is given below. Notice that in FedAvg, local data never leaves the client side. Besides FedAvg, most FL algorithms strictly follow the aforementioned training protocol <cit.>, or roughly follow it with a few modifications <cit.>. FL protects client privacy in two main ways. Firstly, instead of transmitting the raw data, FL transmits only the model parameters, which are updated based on the local data of each client. By doing so, FL ensures that sensitive data remains on the client's device and is not transmitted to the central server or other clients. Secondly, the model parameters uploaded to the server only reveal the distribution of local data, rather than individual data points. This approach helps to maintain privacy by obscuring the specific data points used to train the model. FL can be equipped with differential privacy mechanisms <cit.> to enhance privacy protection. As described in the last section, differential privacy is a technique that involves adding noise to data in order to obscure individual contributions while still maintaining overall data patterns. However, different from graph generation, where the noise is added to the data (e.g., node features, edges, etc.), in the context of FL the noise is added to the uploaded and downloaded model parameters. This ensures that even if an attacker were to obtain the model parameters, they would not be able to accurately infer the raw data from the model parameters. By adding moderate noise to the parameters, the model's accuracy may be slightly reduced, but the overall performance remains comparable to non-private models. In summary, by using differential privacy mechanisms, FL can achieve even better privacy protection by making it harder for attackers to identify the sensitive data contributed by individual clients. §.§.§ Application of Graph Federated Learning In this part, we introduce important applications of federated learning on graph data.
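Before turning to those applications, the following is a minimal, framework-agnostic sketch of the FedAvg loop summarized in the protocol above (broadcast, local update, upload, aggregate); the model, the client data loaders, and the hyper-parameters are placeholders, so this is only an illustration of the protocol and not the reference implementation of any cited system.

# A minimal sketch of the FedAvg protocol (broadcast, local update, upload,
# weighted aggregation). Clients are placeholder dicts with a "loader" and a
# "num_samples" field; the global_model is any torch.nn.Module.
import copy
import random
import torch

def fedavg(global_model, clients, rounds=10, clients_per_round=5, local_epochs=1, lr=0.01):
    for _ in range(rounds):
        selected = random.sample(clients, min(clients_per_round, len(clients)))
        local_states, weights = [], []
        for client in selected:
            # Step 1: broadcast the current global model.
            local_model = copy.deepcopy(global_model)
            # Step 2: local update on the client's own data (data never leaves the client).
            opt = torch.optim.SGD(local_model.parameters(), lr=lr)
            for _ in range(local_epochs):
                for x, y in client["loader"]:
                    opt.zero_grad()
                    loss = torch.nn.functional.cross_entropy(local_model(x), y)
                    loss.backward()
                    opt.step()
            # Step 3: upload only the local parameters.
            local_states.append(local_model.state_dict())
            weights.append(client["num_samples"])
        # Step 4: weighted aggregation of the local models.
        total = float(sum(weights))
        new_state = {
            k: sum(w / total * s[k].float() for w, s in zip(weights, local_states))
            for k in local_states[0]
        }
        global_model.load_state_dict(new_state)
    return global_model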
Roughly, we survey three representative application scenarios: graph-level FL, subgraph-level FL, and node-level FL. * Graph-level FL: Each client has one or several graphs, while different graphs are isolated and independent. One typical application of graph-level FL is for drug discovery <cit.>, where biochemical industries collaborate to train a graph neural network model predicting the property of molecules. Each molecule is a graph with basic atoms as nodes and chemical bonds as edges. * Subgraph-level FL: Each client has one graph, while each graph is a subgraph of an underlying global graph. One representative application of subgraph-level FL is for financial transaction data <cit.>. Each FL client is a bank that keeps a graph encoding the information of its customers, where nodes are individual customers and edges are financial transaction records. While each bank holds its own graph, customers in one bank may have connections to customers in another bank, introducing cross-client edges. Thus, each bank's own graph is a subgraph of an underlying global graph. * Node-level FL: Each client is a node of a graph, and edges are the pairwise relationships between clients, e.g., their distribution similarity or data dependency. One example is the smart city, where clients are traffic sensors deployed on the road and linked to geographically adjacent sensors. While clients form a graph, each client can make an intelligent decision based on the collected road conditions and nearby devices. Figure <ref> illustrates the three application scenarios above. Next, we investigate each application scenario in the following three subsections individually. §.§ Graph-level FL In this subsection, we investigate graph-level FL. Graph-level FL is a natural extension of traditional FL: while each client has one or several graphs, different graphs are isolated and independent. The goal of each client is to train a graph neural network (GNN) model for a variety of local tasks, e.g., node-level (e.g., node classification), link-level (e.g., edge prediction), or graph-level (e.g., graph classification). One of the most representative applications of graph-level FL is drug discovery, where graphs are molecules with atoms as nodes and chemical bonds as edges. Each FL client can be a pharmaceutical corporation that owns molecule data. Multiple corporations collaborate to train better model for molecular property prediction. The biggest challenge of graph-level FL is the non-identical distribution among different clients' data. Since each client in FL collects their local data individually, their local datasets usually have a different distribution. For example, different pharmaceutical corporations may focus on different types of molecules. Such heterogeneity among clients' data distributions introduces optimization challenges to FL. Moreover, when clients' distribution is largely different, it might be harmful or even impossible to train one universal global model across all clients. More sophisticated techniques are required to achieve beneficial collaboration. Next, we will introduce algorithms for graph-level FL in two parts: global federated learning and personalized federated learning. Since graph-level FL is a natural extension of traditional FL, we will cover both general FL algorithms and graph FL algorithms. §.§.§ Global Federated Learning Global federated learning (GFL) aims to train a shared global model for all clients. 
FedAvg <cit.> provides an initial solution for training GNNs with isolated graphs from multiple clients. However, when clients have significantly different underlying distributions, FedAvg needs much more communication rounds for convergence to a satisfactory model, and may converge to a sub-optimal solution <cit.>. This phenomenon of worse convergence is usually explained by weight divergence <cit.>, i.e, even with the same parameter initialization, the model parameters for different clients are substantially different after the first local stochastic gradient descent (SGD) step. With different model parameters, the mean of client gradients can be different from the gradient in centralized SGD, and introduce error to the model loss <cit.>. Data-sharing. To tackle the non-IID challenge to FL optimization, a simple but effective method is to share a small amount of data among clients. <cit.> first explore an association between the weight divergence and the non-IIDness of the data, and propose a method to share a small amount of data among the server and all clients. As a result, the accuracy can be increased by  30% for the CIFAR-10 dataset <cit.> with only 5% globally shared data. <cit.> further improves the privacy of this approach by sharing the average of local data points, instead of raw data. Specifically, each client uploads averaged data, receives averaged data from other clients, and performs Mixup <cit.> data augmentation locally to alleviate weight divergence. However, both methods require modification of the standard FL protocol and transmission of data. Another way to improve privacy is to share synthetic data generated by generative adversarial networks (GANs) <cit.>, instead of the raw data. The synthetic data can be a collection of each client's synthetic data generated with local GANs or generated with one global GAN trained in FL <cit.>. However, it is unclear whether GAN can provide enough privacy, since it may memorize the training data <cit.>. Modifying local update. Another line of research works modifies the local update procedure to alleviate weight divergence without changing the communication protocol of FL. FedProx <cit.> adds a proximal term to the local objective to stabilize the training procedure. The proximal term is the squared L2 distance between the current global model and the local model, which prevents the local model from drifting too far from the global model. SCAFFOLD <cit.> estimates how local updates deviate from the global update, and it then corrects the local updates via variance reduction. Based on the intuition that the global model can learn better representation than local models, MOON <cit.> conducts contrastive learning at the model level, encouraging the agreement of representation learned by the local and global models. §.§.§ Personalized Federated Learning While the aforementioned algorithms can accelerate the model optimization for GFL, one model may not always be ideal for all participating clients <cit.>. Recently, personalized federated learning (PFL) has been proposed to tackle this challenge. PFL allows FL clients to collaboratively train machine learning models while each client can have different model parameters. Clustered FL. In clustered FL, clients are partitioned into non-overlapping groups. Clients in the same group will share the same model, while clients from different groups can have different model parameters. 
In IFCA <cit.>, k models are initialized and transmitted to all clients in each communication round, and each client picks the model with the smallest loss value to optimize. FedCluster <cit.> iteratively bipartition the clients based on their cosine similarity of gradients. GCFL <cit.> generalizes this idea to graph data, enabling collaborative training with graphs from different domains. Observing that the gradients of GNNs can be fluctuating, GCFL+ <cit.> uses a gradient sequence-based clustering mechanism to form more robust clusters. Personalized Modules. Another prevalent way for PFL is personalized modules. In these works, the machine learning model is divided into two parts: the shared part and the personalized part. The key is to design a model structure suitable for personalization. For example, when a model is split into a feature extractor and classifier, FedPer <cit.> shares the feature extractor and personalizes the classifier, while LG-FedAvg <cit.> personalizes the feature extractor and shares the classifier. Similar techniques in used in FMTGL <cit.> and NGL-FedRep <cit.>. Moreover, PartialFed <cit.> can automatically select which layers to personalize and which layers to share. On graph data, <cit.> observe that while the feature information can be very different, some structural properties are shared by various domains, revealing the great potential for sharing structural information in FL. Inspired by this, they propose FedStar that trains a feature-structure decoupled GNN. The structural encoder is globally shared across all clients, while the feature-based knowledge is personalized. Local Finetuning and Meta-Learning. Finetuning is widely used for PFL. In these works, a global model is first trained with all clients. The global model encodes the information of the population but may not adapt to each client's own distribution. Therefore, each client locally finetunes the global model with a few steps of gradient descent. Besides vanilla finetuning, Per-FedAvg <cit.> combines FL with MAML <cit.>, an algorithm for meta-learning, to improve the performance of finetuning. Similarly, pFedMe <cit.> utilize Moreau Envelopes for personalization. It adds a proximal term to the local finetuning objective, and aims to find a local model near the global model, with just a few steps of gradient descent. GraphFL <cit.> applies a similar meta-learning framework on graph data, addressing the heterogeneity among graph data and handling new label domains with a few new labeled nodes. Multi-task Learning. PFL is also studied within the framework of multi-task learning. MOCHA <cit.> uses a matrix to model the similarity among each pair of clients. Clients with similar distribution will be encouraged to have similar model parameters. FedGMTL <cit.> generalizes this idea to graph data. Similarly, SemiGraphFL <cit.> computes pairwise cosine similarity among clients' hidden representations. As a result, clients with more similar data will have greater mutual influence. However, it requires the transmission of hidden representation. FedEM <cit.> assumes that each client's distribution is a mixture of unknown underlying distributions and proposes FedEM, an EM-like algorithm for multi-task FL. Finally, FedFOMO <cit.> allows each client to have a different mixture weight of local models during the aggregation steps. It provides a flexible way for model aggregation. Graph Structure Augmentation. In the previous works, graph structures are considered as ground truth. 
However, graphs can be noisy or incomplete, which can hurt the performance of GNNs. To tackle incomplete graph structures, FedGSL <cit.> optimizes the local client's graph and GNN parameters simultaneously. §.§ Subgraph-level FL Similar to graph-level FL, each client in subgraph-level FL holds one graph. However, clients' graphs are a subgraph of a latent large entire graph. In other words, there are cross-client edges in the entire graph, where the two nodes of these edges belong to different clients. The task is usually node-level, while the cross-client edges can contribute to the task. One application of subgraph-level FL is financial fraud detection. Each FL client is a bank aiming to detect potential fraud with transaction data. Each bank keeps a graph of the information of its customers, where nodes are individual customers and edges are transaction records. While each bank holds its own graph, customers in one bank may have connections to customers in another bank, introducing edges across clients. These cross-client edges help to train better ML models. The biggest challenge for subgraph-level FL is to handle cross-client edges. In GNNs, each node iteratively aggregates information from its neighboring nodes, which may be from other clients. However, during local updates in traditional FL, clients cannot get access to the data from other clients. Directly exchanging raw data among clients is prohibited due to privacy concerns. It is challenging to enable cross-client information exchange while preserving privacy. Moreover, when nodes' identities are not shared across clients, the cross-client edges can be missing and stored in none of the clients. Even if we collect clients' local subgraphs, we cannot reconstruct the global graph. In this subsection, we will mainly focus on two scenarios. In the first part, we introduce algorithms when the hidden entire graph is given but stored separately in different clients. In the second part, we consider a more challenging setting: the cross-client edges are missing, and we cannot simply concatenate local graphs to reconstruct the entire graph losslessly. We focus on how to generate these missing edges or missing neighbors for each node. §.§.§ Cross-client Propagation When the cross-client edges are available, the major challenge is to enable cross-client information propagation without leaking raw data. FedGraph <cit.> designs a novel cross-client convolution operation to avoid sharing raw data across clients. It avoids exchanging representations in the first GCN layer. Similarly, FedPNS <cit.> control the number of neighbor sampling to reduce communication costs. FedCog <cit.> proposes graph decoupling operation, splitting local graph to internal graph and border graph. The graph convolution is accordingly divided into two sequential steps: internal propagation and border propagation. In this process, each client sends the intermediate representation of internal nodes to other clients. Considering that directly exchanging feature representations between clients can leak private information. In user-item graphs, FedPerGNN <cit.> design a privacy-preserving user-item graph expansion protocol. Clients upload encrypted item IDs to the trusted server, and the server matches the ciphertexts of item IDs to find clients with overlapping item IDs. DP-FedRec <cit.> uses private set intersection to exchange the edges information between clients and applies differential privacy techniques to further protect privacy. 
Different from the above methods, FedGCN <cit.> does not rely on communication between clients. Instead, it transmits all the information needed to train a GCN between the server and each client, only once before the training. Moreover, each node at a given client only needs to know the accumulated information about the node's neighbors, which reduces possible privacy leakage. §.§.§ Missing Neighbors For some applications, the cross-client edges can be missing or not stored in any clients. Notice that although each client also holds a disjoint graph in graph-level FL, graph-level FL and subgraph-level FL with missing neighbors are substantially different. For graph-level FL, there are essentially no cross-client edges. For example, there are no chemical bonds between two molecules from different corporations' datasets. However, for subgraph-level FL, the cross-client edges exist, but are missing in certain applications. We may get suboptimal GNN models if ignoring the existence of cross-client edges. Therefore, the major challenge is to reconstruct these missing edges, or reconstruct missing neighbors for each node. FedSAGE <cit.> first defines the missing neighbors' challenge, and proposes a method the generate pseudo neighbors for each node. It uses existing subgraphs to train a neighbors generator and generate one-hop neighbors for each client to mend the graph. Since missing neighbors are generated locally, no feature exchange is required between clients after the local subgraphs are mended. However, the training of neighbor generators requires cross-client hidden representation exchanges. Similarly, FedNI <cit.> uses a graph GAN model to generate missing nodes and edges. §.§ Node-level FL The final application scenario of graph federated learning is node-level. Different from the aforementioned two scenarios, each client in node-level FL can hold any type of data, not restricted to graphs. Instead, the clients themselves are nodes in a graph, while the edges are their pairwise relationship of communication or distribution similarity. One typical application of node-level FL is the Internet of Things (IOT) devices in a smart building <cit.>. Due to bandwidth constraints, it can be costly for each IoT device to communicate with the central server. However, IoT devices in the same local area network can communicate very efficiently. As a result, IoT devices form a graph with pairwise communication availability as edges. Another application is for the smart city <cit.>, where clients are traffic sensors deployed on the road and linked to geographically adjacent sensors. Each device can collect data and make the real-time decision without waiting for the response of cloud servers. Each device needs to make an intelligent decision based on the collected road conditions and nearby devices. In this subsection, we will first introduce algorithms where the graph models communication constraints among clients. In these works, there is no central server, and clients can only exchange information along edges. Then, we will introduce algorithms where the graph models the relationship between clients' distributions. In these works, although a central server is available, the graph among clients models distributional similarity or dependency among clients, potentially contributes to the model performance. §.§.§ Graph as Communication Network Traditional FL relies on a central server to enable communication among clients. 
Each client trusts the central server and uploads their model update to the server. However, in many scenarios, a trusted central server may not exist. Even when a central server exists, it may be expensive for clients to communicate with the server. Therefore, serverless FL (a.k.a. peer-to-peer FL) has been studied to relieve communication constraints. The standard solution for serverless FL is fully decentralized FL <cit.>, where each client only averages its model parameter with its neighbors. D-FedGNN <cit.> uses these techniques to train GNN models. SpreadGNN <cit.> generalizes this framework to personalized FL, where each client has non-IID data and a different label space. §.§.§ Graph as Distribution Similarities When the central server is available, a graph of clients may still be beneficial when it models distributional relationships among clients. When edges link clients with highly similar distributions, parameter sharing along edges can potentially improve the model performance for both clients. When edges link clients with data dependency, information exchange along edges can even provide additional features for inference. FedGS <cit.> models the data correlations of clients with a data-distribution-dependency graph, and improves the unbiasedness of the client sampling process. Meanwhile, SFL <cit.> assumes a pre-defined client relation graph stored on the server, and the client-centric model aggregation is conducted along the relation graph’s structure. GraphFL <cit.> considers client-side information to encourage similar clients to have similar models. BiG-Fed <cit.> applies graph convolution on the client graph, so each client's prediction can benefit from its neighbors with highly correlated data. Finally, <cit.> designs a client sampling technique considers both communication cost and distribution similarity. Finally, we summarize the official implementation of FL algorithms and useful repositories in Table <ref>. §.§ Challenges and Future Opportunities In this part, we present several limitations in current works and provide open problems for future research. §.§.§ Model Heterogeneity for Graph-Level FL In previous works of graph-level FL, although each FL client usually has different data distribution it is usually assumed that the model architecture is shared across all clients. However, the optimal architecture for different clients can be different. For example, a well-known issue in GNNs is the over-smoothing problem. When the number of graph convolutional layers is higher than the diameter of the graph, GNN models may learn similar representations for all nodes in the graph, which harms the model performance. When each FL clients hold a substantially different size of graphs, it is highly likely that the optimal depth of the GNN model is different for them. §.§.§ Avoiding Cross-Client Transmission for Sub-graph-Level FL Most of the previous subgraph-level FL algorithms highly rely on direct information exchange along cross-client edges. While such operations are natural variants of graph convolution, such operations also raise privacy concerns. Moreover, different from traditional FL where each client downloads aggregated model parameters that reveal the population, feature exchange along the edges can expose information about individuals. It would be beneficial if the cross-client transmission can be avoided without greatly degrading the model. 
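To close this section, the following minimal sketch illustrates the client-graph-based aggregation underlying the node-level FL setting of Subsection 3.4, where each client's parameters are averaged with those of its neighbors in a client relation graph; the similarity matrix and parameter vectors are toy placeholders, and the routine is only inspired by, not an implementation of, methods such as SFL.

# A minimal sketch of client-centric aggregation along a client relation graph.
import numpy as np

def client_centric_aggregation(params, similarity, self_weight=0.5):
    """params: (n_clients, dim) array of flattened client model parameters.
    similarity: (n_clients, n_clients) non-negative client relation graph.
    Each client keeps `self_weight` of its own model and averages the rest
    over its neighbors, weighted by edge similarity."""
    n = params.shape[0]
    personalized = np.empty_like(params)
    for i in range(n):
        w = similarity[i].astype(float).copy()
        w[i] = 0.0
        neighbor_avg = params[i] if w.sum() == 0 else (w[:, None] * params).sum(0) / w.sum()
        personalized[i] = self_weight * params[i] + (1 - self_weight) * neighbor_avg
    return personalized

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = rng.normal(size=(4, 3))             # 4 clients, 3 parameters each
    sim = np.array([[0, 1, 1, 0],
                    [1, 0, 0, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)  # client relation graph
    print(client_centric_aggregation(params, sim))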
§ ENVISIONING In this section, we analyze the current developments and limitations of privacy-preserving graph machine learning, and explain the necessity of combining them. In addition, we identify a number of unsolved research directions that could be addressed to improve the privacy of graph machine learning systems. §.§ Limitation of Current Techniques In the previous two sections, we introduced privacy-preserving graph data generation and computation, respectively. However, both techniques have their own limitations. * For privacy-preserving graph generation, while it can provide good privacy protection for graph data, it also has a significant drawback on model utility. The privacy-preserving techniques applied during data generation are not designed for specific machine learning tasks and may influence the utility of the resulting model. For example, consider a graph with four nodes a, b, c, and d. The nodes a and b have a positive label, while c and d have a negative label. Switching the edges from (a, b), (c, d) to (a, c), (b, d) does not change the degree distribution of the graph, but it changes the graph from a homophilous graph to a heterophilous graph, i.e., edges are more likely to link two nodes with different labels. This change can harm the performance of many GNN models, which are designed to work well with homogeneous graphs <cit.>. It is important to consider the downstream machine learning tasks when designing privacy-preserving techniques for graph data. * For privacy-preserving graph computation, while FL can avoid the transmission of raw data, it has been shown that transmitting raw model parameters or gradients may not provide enough privacy, as attackers can use the gradient or model update to reconstruct private data <cit.>. Moreover, many subgraph-level and node-level federated learning algorithms require the transmission of hidden representations, which can also leak private information. Therefore, protecting the raw data from being reconstructed is essential to federated learning systems. §.§ Combination of Privacy-Preserving Graph Data Generation and Computation To address the limitations of current privacy-preserving techniques, it is essential to combine privacy graph data generation with the graph federated learning frameworks, as shown in Figure <ref>. This approach can provide an effective solution to the privacy preservation issues of graph machine learning models. Specifically, the generated synthetic data is used instead of the real data during the training process. This means that even if the transmitted information is decrypted, it is just from the generated synthetic data and not the real data. The synthetic data can be generated in such a way that it preserves the statistical properties of the original data while ensuring privacy preservation. This can be achieved using various techniques, including differential privacy, homomorphic encryption, and secure multi-party computation. The combination of privacy graph data generation and graph federated learning frameworks has several benefits. First, it ensures privacy preservation during the training process by using synthetic data. Second, it enables the transfer of graph machine learning model parameters rather than embedding vectors or other information. This can improve the accuracy and efficiency of the model. Finally, it provides a robust defense against privacy attacks and reverse-engineering, as the transmitted information is just from the generated synthetic data and not the real data. 
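A minimal sketch of this combined pipeline is given below: each client sanitizes its local graph with a degree-preserving edge switch (one of the randomization mechanisms of Section 2) before any federated training takes place; the switching routine and the hypothetical train_gnn_federated call are illustrative placeholders rather than a prescribed implementation.

# A minimal sketch of combining privacy-preserving graph generation with
# federated computation: sanitize each local graph first, train on the
# sanitized graphs only.
import random
import networkx as nx

def degree_preserving_switch(G: nx.Graph, num_switches: int, seed: int = 0) -> nx.Graph:
    """Randomly switch edge pairs (a,b),(c,d) -> (a,d),(c,b) when the new edges
    do not already exist, keeping every node degree unchanged."""
    H = G.copy()
    rng = random.Random(seed)
    for _ in range(num_switches):
        (a, b), (c, d) = rng.sample(list(H.edges()), 2)
        if len({a, b, c, d}) == 4 and not H.has_edge(a, d) and not H.has_edge(c, b):
            H.remove_edges_from([(a, b), (c, d)])
            H.add_edges_from([(a, d), (c, b)])
    return H

# Hypothetical federated pipeline: sanitize locally, then train on sanitized data.
# sanitized_graphs = [degree_preserving_switch(G_i, num_switches=10 * G_i.number_of_edges())
#                     for G_i in local_graphs]
# global_model = train_gnn_federated(sanitized_graphs)   # e.g., the FedAvg loop sketched earlier

Because only the sanitized graphs ever enter the training loop, any information leaked through gradients or exchanged representations can reveal at most the synthetic data, which is the property argued for above.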
§.§ Future Directions Combining privacy-preserving data generation and computation is a promising approach to protect individual privacy while maintaining model utility in machine learning. However, it also poses several challenges and possible future directions. §.§.§ Distribution of Privacy Budget When combining privacy-preserving data generation with computation, noises are added to both raw data and model parameters. However, it is still unclear how to distribute the privacy budget between data generation and computation in a way that optimizes the privacy-utility trade-off. In this approach, noises are added to the graph data during data generation and to the model parameters during data computation (i.e., federated learning), which results in an overall reduction in accuracy. However, while the privacy analysis for data generation is directly defined on the data space, the privacy analysis for federated learning requires transforming the change on parameter space back to data space. Such transformation requires estimating the sensitivity of a machine learning algorithm (i.e., how the change of a data point affects the learned parameters), which is only loosely bounded in current works <cit.>. A more precise analysis of privacy is required to better understand the impact of privacy budget allocation on the overall privacy-utility trade-off. §.§.§ Parameter Information Disentanglement Another future challenge when combining privacy-preserving data generation and computation is the disentanglement of task-relevant and task-irrelevant information. Currently, the noise added to the model parameters is isotropic, meaning that task-relevant and task-irrelevant information are equally protected. However, not all information is equally important for model utility. If we can identify which information has a significant influence on model performance, we can distribute more privacy budget to this information while allocating less privacy budget to task-irrelevant information. This can result in a better privacy-utility trade-off. Disentangling task-relevant and task-irrelevant information would require a more sophisticated analysis of model architecture and data characteristics to determine which features contribute most to model performance. § CONCLUSION In this paper, we review the research for privacy-preserving techniques for graph machine learning from the data to the computation, considering the situation where the data need to be shared or are banned from being transmitted. To be specific, for privacy-preserving graph data generation techniques, we analyze the forceful attackers first and then introduce how corresponding protection methods are proposed to defend attackers. For the privacy graph data computation, we circle around the federated learning setting and discuss how the general federated learning framework applied to graph data and what the potential challenges originated from non-IIDness, and how the nascent research works address them. In the end, we analyze the current limitation and propose several promising research directions. § ACKNOWLEDGEMENTS This work is supported by the National Science Foundation (1947203, 2117902, 2137468, 1947135, 2134079, and 1939725), the U.S. Department of Homeland Security (2017-ST-061-QA0001, 17STQAC00001-06-00, and 17STQAC00001-03-03), DARPA (HR001121C0165), NIFA (2020-67021-32799), and ARO (W911NF2110088). 
The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government. abbrv
http://arxiv.org/abs/2307.04719v1
20230710173139
On the curvature of the loss landscape
[ "Alison Pouplin", "Hrittik Roy", "Sidak Pal Singh", "Georgios Arvanitidis" ]
cs.LG
[ "cs.LG" ]
On the curvature of the loss landscape ====================================== One of the main challenges in modern deep learning is to understand why such over-parameterized models perform so well when trained on finite data. A way to analyze this generalization concept is through the properties of the associated loss landscape. In this work, we consider the loss landscape as an embedded Riemannian manifold and show that the differential geometric properties of the manifold can be used when analyzing the generalization abilities of a deep net. In particular, we focus on the scalar curvature, which can be computed analytically for our manifold, and show connections to several settings that potentially imply generalization. § FLATNESS AND GENERALIZATION IN MACHINE LEARNING The relationship between the generalization ability of a model and the flatness of its loss landscape has been a subject of interest in machine learning. Flatness refers to the shape of the hypersurface representing the loss function, parameterized by the parameters of the model. Flat minima are characterized by a wide and shallow basin. Generalization refers to the ability of a model to perform well on unseen data. A widely accepted hypothesis, proposed by various research groups hochreiter1997flat, hinton1993keeping, buntine1991bayesian several decades ago, suggests that flat minima are associated with better generalization compared to sharp minima. The basis of this hypothesis stems from the observation that when the minima of the optimization landscape are flatter, the weights can be specified with lower precision, which in turn has the potential to improve the robustness of the model. Figure: On the left, a surface represents a loss function f on its two-dimensional parameter space. We can see two minima, a sharp minimum and a flatter minimum. A Brownian motion navigates the parameter space around those two minima, in blue for the sharp one and in red for the shallow one. On the right, the upper panel represents the Brownian motion navigating the parameter space; the same Brownian motion is used for both minima. The lower panel represents the perturbations of the loss f in both the sharp (blue) and flat (red) minima. The loss is more robust to perturbations in the flatter minimum. The notion of flatness has been challenged by dinh2017sharp, who argued that the different flatness measures proposed are not invariant under reparametrization of the parameter space and questioned the assumption that flatness directly causes generalization. Yet, numerous empirical and theoretical studies have presented compelling evidence that supports the relationship between flatness and enhanced generalization. This relationship has been observed in various contexts, by averaging weights izmailov2018averaging, studying inductive biases neyshabur2017geometry, imaizumi2022generalization, introducing different kinds of noise in gradient descent chaudhari2019entropy, pittorino2021entropic, adopting smaller batch sizes keskar2016large, and investigating ReLU neural networks yi2019positively. The exact relationship between flatness and generalization is still an open problem in machine learning. In this preliminary work, we build upon the flatness hypothesis as a primary motivation to investigate the curvature of the loss landscape, approaching it from a differential geometric perspective.
In this preliminary work, we analyze the loss landscape as a Riemannian manifold and derive its scalar curvature, an intrinsic Riemannian object that characterizes the local curvature of the manifold. We find that the scalar curvature, at minima, has a straightforward expression and can be related to the norm of the Hessian. While the norm of the Hessian may not always accurately measure flatness, it remains a valuable indicator for understanding optimization. Our findings demonstrate that the scalar curvature possesses all the benefits of the Hessian norm without its limitations. § GEOMETRY OF THE LOSS LANDSCAPE AND CURVATURE We are interested in finding the parameters of a model that minimize the loss function, denoted f. The loss function is a smooth function defined on the parameter space Ω ⊂ ℝ^q, where q is the number of parameters. In order to study the loss landscape of a model, we can look at the geometry of the graph of the loss function, which is a hypersurface embedded in ℝ^q+1. Let f: Ω ⊂ ℝ^q → ℝ be a smooth function. We call the graph of the function the set: Γ_f = {(θ, y) ∈ Ω × ℝ | y = f(θ)}. The graph Γ_f is a smooth topological manifold embedded in ℝ^q+1, and it is isometric to the Riemannian manifold (Ω, g) with Ω ⊂ ℝ^q and the induced metric g_ij = δ_ij + ∂_i f ∂_j f. The metric is obtained by pulling back, in one case, the loss function to the parameter space (∂_i f ∂_j f), and in the other case, the parameter space to itself (δ_ij) lee2018introduction. Instead of working in the ambient space ℝ^q+1, it is more convenient to study the intrinsic geometry of the loss function in the parameter space (Ω, g). In particular, knowing the Riemannian metric, we can compute the associated geometric quantities of the loss landscape, such as the Christoffel symbols, the Riemannian curvature tensor, and the scalar curvature (see Appendix <ref> for an introduction to those quantities). In the following, we will denote by ∇ f the Euclidean gradient and by 𝐇 the Euclidean Hessian of the loss function f: Gradient(f): (∇ f)_i = ∂_i f = f_,i Hessian(f): (𝐇)_ij = ∂_i ∂_j f = f_,ij The Christoffel symbols define a corrective term used to compute covariant derivatives in a curved space. They can be derived from the Riemannian metric. The Christoffel symbols are given by: Γ^i_kl = β f_,i f_,kl, with β = (1+‖∇ f‖^2)^-1. See Appendix <ref>. Using those Christoffel symbols, we can directly compute the Riemannian curvature tensor. Using the Einstein summation convention, the Riemannian curvature tensor is an intrinsic mathematical object that characterizes the deviation of the curved manifold from the flat Euclidean manifold. The Riemannian curvature tensor is given by: R^i_jkm = β (f_,ik f_,jm - f_,im f_,jk) - β^2 f_,i f_,r (f_,rk f_,jm - f_,rm f_,jk), with β = (1+‖∇ f‖^2)^-1. See Appendix <ref>. While this four-index tensor gives us a complete picture of the curvature of a manifold, it can be difficult to interpret in practice. Instead, a scalar object, the scalar curvature, can be derived from the Riemannian curvature tensor. The scalar curvature quantifies locally how curved the manifold is. The scalar curvature is given by: S = β((tr 𝐇)^2 - tr(𝐇^2)) + 2β^2 ( ∇ f^⊤ (𝐇^2 - tr(𝐇) 𝐇) ∇ f), with β = (1+‖∇ f‖^2)^-1. See Appendix <ref>. This expression simplifies when the gradient is zero, which corresponds to a critical point of the loss function.
In this case, the scalar curvature is given by: When an extremum is reached (∇ f=0), the scalar curvature becomes: (_min) = ()^2 - (^2) This is a direct result of Proposition <ref>, when ∇ f = 0. Note that we can also write, at the minimum, (_min) = _*^2 - _F^2, with ·_* the nuclear norm and ·_F the Frobenius norm. §.§ The scalar curvature as the deviation of the volume of geodesic balls This scalar curvature has a simple interpretation, as it corresponds to the difference in volume between a geodesic ball embedded in the Riemannian manifold and a ball of reference, the Euclidean ball. In hyperbolic spaces, the Riemannian ball will be bigger than the Euclidean one, and in spherical spaces, it will be smaller. If the curved space is flat, they are both equal in volume, and the scalar curvature is null. [Theorem 3.98]gallot1990riemannian The scalar curvature () at a point ∈ of the Riemannian manifold of dimension q is related to the asymptotic expansion of the volume of a ball on the manifold ℬ_g(r) compared to the volume of the ball in the Euclidean space ℬ_e(r), when the radius r tends to 0. (ℬ_g(r)) = (ℬ_e(r)) (1-()/6(q+2) r^2 + o(r^2)) § SCALAR CURVATURE AND OPTIMIZATION Corollary <ref> establishes a connection between the scalar curvature at each peak or valley in the loss landscape and the magnitude of the Hessian: () = *^2 - _F^2. Although the Hessian norm plays a key role in optimization tasks, we contend that it is not the most reliable gauge of flatness in all situations. On one hand, will delve into some issues that arise from only using the Hessian norm in Section <ref>. On the other hand, we will see how the scalar curvature reduces to the Hessian norm in some cases and supports theoretical findings in optimization in Section <ref>. §.§ Limitations of the trace of the Hessian as a measure of flatness The Hessian of the loss function, specifically its trace, has been shown to influence the convergence of optimization algorithms. For instance, wei2019noise revealed that stochastic gradient descent (SGD) reduces the trace of the loss function's Hessian in the context of over-parameterized networks. In a similar vein, orvieto2022anticorrelated discovered that SGD with anti-correlated perturbations enhances generalization due to the induced noise reducing the Hessian's trace. They also identified that the trace serves as an upper limit on the mean loss over a posterior distribution. Furthermore, within Graphical Neural Networks, ju2023generalization demonstrated that the trace of the Hessian can evaluate the model's resilience to noise. §.§.§ The saddle point problem Yet, relying solely on the trace of the Hessian may not provide an accurate measure of flatness. For instance, if half of the eigenvalues are positive and the other half are negative, with their sum equaling zero, the trace of the Hessian will also be zero. This is misleading as it suggests a flat region, when in reality it is a saddle point. [Curvature of a parameterized function] Let us imagine that the loss is represented by a function taking in inputs two weights u and v such that: f(u,v) = e^-c usin(u) sin(v), with c a positive constant. We notably have lim_u→∞ f(u,v) = 0, and so the surface tends to be flatter with u increasing. 
The trace of the Hessian of f and its scalar curvature can be computed analytically, and we have at a point =(u,v): ()() = e^-c u (-2 ucos(u) + (c^2-2)sin(u)) sin(v) () = (c^2-1)cos(2 u)-cos(2v) - c (c-2sin(2u))/e^2cu + cos(v)sin(u)^2 + (cos(u)-csin(u))sin(v)^2 §.§.§ The expected flatness over mini-batches [14]r0.4 < g r a p h i c s > The data points fit a sinus. The dataset is split into 7 batches of different colors. If the flatness is defined as (), the flatness over the entire dataset is equal to the expectation of the flatness of a batch. Thus, the curve is considered flat. Another challenge emerges when the dataset is divided into small batches. If we choose the Hessian's trace as the measure of flatness, the overall flatness of the entire dataset equals the average flatness over these batches (Equation <ref>). This could potentially induce the wrong conclusion depending on the method used to partition the dataset: In Figure <ref>, the dataset is split in such a way that the trace of the Hessian is null for each batch, which means that the curve is considered as flat over the entire dataset. The dataset, denoted 𝒟, is split into k mini-batches: {ℬ_1, ℬ_2, …, ℬ_k }. By linearity, the Hessian of the loss function over the entire dataset can be written as the mean of the Hessian of mini-batches i.e.: _𝒟 = 1/k∑_i _ℬ_i As a consequence, since the trace commutes with a summation, we have: (_𝒟) = (1/k∑_i _ℬ_i) = 1/k∑_i (_ℬ_i) = 𝔼[(_ℬ_i)]. The trace of the Hessian of the loss function over the entire dataset is the expectation of the Hessian over mini-batches: (_𝒟) = 𝔼[(_ℬ_i)] The corresponding result does not hold for the scalar curvature in general. The scalar curvature of the hessian of the full dataset is not equal to the expectation of the Scalar curvature over mini-batches. That is there exists a dataset, 𝒟, and mini-batches, {ℬ_1, ℬ_2, …, ℬ_k } such that: (_𝒟) ≠𝔼[(_ℬ_i)] See Appendix <ref>. §.§ The scalar curvature supports previous theoretical findings through the Hessian norm Although the two previous given examples suggest that in some cases, the trace of the Hessian is not a good definition of flatness, it is associated with the optimization process and the model's capacity to generalize in various ways. We will observe that under certain circumstances, the scalar curvature simplifies to the Hessian norm. §.§.§ Perturbations on the weights seong2018towards showed that the robustness of the loss function to inputs perturbations is related to the Hessian. We similarly show that the resilience of the loss function to weights perturbations is upper bounded by the norm of the Hessian. Additionally, a smaller scalar curvature implies stronger robustness. Let _min an extremum, ε, a small scalar (ε≪ 1) and a normalized vector (=1). The trace of the square of the Hessian is an upper bound to the difference of the loss functions when perturbed by the weights: f(_min + ε) - f(_min)_2^2 ≤1/4ε^4 (^2_min) This is obtained by applying the Taylor expansion, for a very small pertubation ε≪ 1. See Appendix <ref> for the full proof. [b]0.3 < g r a p h i c s > [b]0.3 < g r a p h i c s > Empirical demonstration of Proposition <ref>. We train two identical and differently initialized deep nets using the same optimizer (Adam). We then perturb pointwise the learned weights using Gaussian noise 𝒩(0,0.1^2). As expected the model on the left with scalar curvature ≈ 430 is more robust to perturbations compared to the right model with scalar curvature ≈ 610. 
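The experiment in the figure can be reproduced with a few lines. The sketch below is our own hedged illustration (model_flat, model_sharp, and the data tensors are hypothetical placeholders, not artifacts of the paper): it adds pointwise Gaussian noise N(0, 0.1^2) to the trained weights and records how far the loss moves, which the proposition bounds by (ε^4/4)·tr(H^2) at a minimum.

```python
# Hedged sketch: how much does the loss move when trained weights are perturbed?
import copy
import torch

@torch.no_grad()
def loss_shift_under_noise(model, loss_fn, data, sigma=0.1, trials=100):
    """Average |L(w + n) - L(w)| over pointwise Gaussian perturbations n ~ N(0, sigma^2)."""
    x, y = data
    base = loss_fn(model(x), y).item()
    shifts = []
    for _ in range(trials):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * sigma)   # perturb every weight in place
        shifts.append(abs(loss_fn(noisy(x), y).item() - base))
    return sum(shifts) / trials

# Hypothetical usage with two trained nets, as in the figure:
# print(loss_shift_under_noise(model_flat,  torch.nn.functional.cross_entropy, (x, y)))
# print(loss_shift_under_noise(model_sharp, torch.nn.functional.cross_entropy, (x, y)))
```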
Let us assume two minima _1 and _2, and we suppose that the loss function at _1 is flatter than the one at _2 in terms of scalar curvature so 0 ≤(_1) ≤(_2). Being at the minimum implies that (_1) = (_1)^2 - (^2_1) and (_2) = (_2)^2 - (^2_2) respectively. Then: 0 ≤(x_1) ≤(x_2) 0 ≤(_1)^2 - (^2_1) ≤(_2)^2 - (^2_2) ⇒(^2_1) ≤(^2_2). A flatter minima (_1)≤(_2) leads to more robustness of the loss function to weights perturbations: f(_1 + ε) - f(_1) _2^2 ≤f(_2 + ε) - f(_2) _2^2. In Figure <ref>, we consider ε∼𝒩(0,0.01) to be a small perturbation and we plotted the original loss function with the perturbed losses. We computed the ^2 at the minimum. When the scalar curvature is smaller, the variance across the perturbations at the minimum is smaller and the perturbations are more centered around the original loss function. §.§.§ Efficiency of escaping minima Stochastic gradient descent can be conceptualized as an Ornstein-Uhlenbeck process uhlenbeck1930theory, which is a continuous-time stochastic process that characterizes the behavior of a particle influenced by random fluctuations mandt2017stochastic. By considering the non-linear relationship between the weights and the covariance, the update rules in gradient descent resemble the optimization approach employed in the multivariate Ornstein-Uhlenbeck process. When approximating the covariance by the Hessian [Appendix A]jastrzebski2017three, the gradient descent can be seen as an Ornstein-Uhlenbeck process with: d_t = - _t dt + ^1/2d W_t The escaping efficiency measure is a metric used to evaluate the performance of optimization algorithms, including gradient descent, in escaping from local minima and finding the global minimum of the loss function, and is defined as [ f(_t)- f()]. zhu2018anisotropic used this definition and the expression of the gradient descent process (Equation <ref>) to approximate the escaping efficiency: [ f(_t)- f()] ≈t/2(^2). Similar to the example above, gradient descent will have more difficulties to escape from a minima with a small scalar curvature, and so it will converge more quickly to the flat minima. §.§.§ The scalar curvature is the squared norm of the Hessian in over-parameterized neural networks We note the Hessian of the loss of a model with q parameters, and the scalar curvature, obtained in Proposition <ref> and Corollary <ref>. When we reach a flat minimum, supposing the eigenvalues of are similar, for a high number of parameters q, we have: (_min) q →∞∼()^2 Let us suppose that, at a flat minimum, all the eigenvalues are similar: λ_1 = ⋯ =λ_q = λ≥ 0. Then we, have _*^2 = q^2 λ^2 and _F^2 = q λ^2. When the number of parameters increases, _F^2 = o(_*^2), and as a consequence _*^2 - _F^2 ∼_*^2. In this proposition, we assume that all the eigenvalues are similar. This strong assumption is supported by empirical results ghorbani2019investigation. The empirical results show that during the optimization process, the spectrum of the eigenvalues becomes entirely flat, especially when the neural network includes batch normalization. §.§ Reparametrization of the parameter space The main argument challenging the link between flatness and generalization is that the flatness definitions, so far, are not invariant under reparametrization. Reparametrization refers to a change in the parametrization of the model, which can be achieved by transforming the original parameters (θ) into a new set of parameters (η). 
Even if we assume that the models have the same performance: {f_θ, θ∈Θ⊂^q} = {f_φ(η), η∈φ^-1(Θ)}, this reparametrization alters the shape of the loss function landscape in ^q. This is the core of the problem: dinh2017sharp compared the flatness of f_θ and f_φ(η) with respect to the same ambient space ^q, while each measure should be defined, and compared, relative to their respective parameter space, and not to an arbitrary space of reference. The scalar curvature is not invariant under reparametrization of the parameter space, and it should not be. It is, however, an intrinsic quantity, which means that it does not depend on an ambient space. As a consequence, it is also equivariant under diffeomorphism, and notably, if and ' are two Riemannian manifolds related by an isometry Ψ:→', then () = (Ψ()), for all ∈. In the case of the scalar curvature, if we apply a diffeomorphism to the parameters space with φ:→', and f:'⊂^q→ the loss function, then: (f ∘φ) = (φ)^⊤(f) (φ) + _k(f) ^k(φ), with (φ), (f) the Jacobian of φ and f, and (f ∘φ), (f) and ^k(φ) the Hessian of f ∘φ and f. we note ^k(φ)_ij = ∂_i ∂_j φ^k the Hessian of the k-th component of φ. At the minimum of the loss function, (f)=0, with φ:→' a diffeomorphism, and '=φ(), the scalar curvatures on and ' is derived as: () = _f_*^2 - _f_F^2, (') = _φ_φ^⊤_f_*^2 - _φ_φ^⊤_f_F^2. § DISCUSSION Our research focused on analyzing the loss landscape as a Riemannian manifold and its connection to optimization generalization. We introduced a Riemannian metric on the parameter space and examined the scalar curvatures of the loss landscape. We found that the scalar curvature at minima is defined as the difference between the nuclear and Frobenius norm of the Hessian of the loss function. The flatness hypothesis forms the basis of our study, suggesting that flat minima lead to better generalization compared to sharp ones. The Hessian of the loss function is known to be crucial in understanding optimization. However, analyzing the spectrum of the Hessian, particularly in over-parameterized models, can be challenging. As a result, the research community has started relying on the norm of the Hessian. We show that, in certain scenarios, the Hessian norm doesn't effectively gauge flatness, whereas scalar curvature does. Despite this, the Hessian norm is still relevant to theoretical results in optimization, including the model's stability against perturbations and the algorithm's ability to converge. Similarly, these characteristics are also satisfied by the scalar curvature. In essence, the scalar curvature combines all the advantages of the Hessian norm while accurately describing the curvature of the parameter space. Future research could explore the curvature within stochastic optimization and investigate the scalar curvature as a random variable affected by the underlying data and batch distribution. It would also be interesting to understand how the scalar curvature relates to the stochastic process and whether it is connected to any implicit regularization in the model. Overall, our study contributes to the understanding of the loss function's parameter space as a Riemannian manifold and provides insights into the curvature properties that impact optimization and generalization. § APPENDIX § A PRIMER ON CURVATURES IN RIEMANNIAN GEOMETRY The key strength of the Riemannian geometry is to allow for calculations to be conducted independently of the choice of the coordinates. However, this flexibility results in more sophisticated computations. 
Specifically, as a vector moves across a manifold, its local coordinates also change. We must consider this shift, which is accomplished by including a correction factor, denoted as Γ, to the derivative of the vector. These factors Γ are known as Christoffel symbols. Let (,g) be a Riemannian manifold, and and two vector fields on . On the manifold, we need to add the Christoffel symbols Γ ^k_ij to account for the variation of the local basis represented by _i. The covariant derivative, or connection, is then defined by: ∇ _ =u^i ∂_i v^j _j + u^i v^j Γ^k_ji_k, with ∇_ = u^i ∂_i v^j _j the covariant derivative of along in the Euclidean plane. We can further compute the Christoffel symbols based on the Riemannian metric tensor g_ij: Γ^k_ij = 1/2 g^kl( ∂_i g_jl + ∂_j g_il - ∂_l g_ij), Now, we are interested in the concept of curvature. In Riemannian geometry, the curvature is defined as the deviation of the manifold from the Euclidean plane. The principal intrinsic tool that assess the curvature of a manifold is the Riemann curvature tensor, denoted . It characterises the change of the direction of a vector, when transported along an infinitesimally small closed loop. The Riemannian curvature tensor is defined the following way: Let (, g, ∇) be a Riemannian manifold. The Riemannian curvature tensor is defined by: (, ; ) = ∇_∇_ - ∇_∇_ - ∇_[,], for any vector fields , , ∈, with [·,·] the Lie bracket. At the local basis represented by _i, it can be expressed in terms of indices: ^l_ijk = ^l (_j, _k; _i), and in terms of the Christoffel symbols as: R_ijk^l = ∂_iljk - ∂_j lik + mjklim - mikljm The Riemann curvature tensor being a fourth order tensor, it can difficult to interpret. Instead, we can look at a scalar quantity called the scalar curvature or equivalently the scalar Ricci curvature, which is a contraction of the Riemann curvature tensor. Let (,g) be a Riemannian manifold. The scalar curvature is defined as: = g^ij^k_ikj, using the Einstein summation convention, with g^ij the inverse of the metric tensor g_ij, and ^k_ikj the components of the Riemannian curvature tensor. Just like the Riemannian curvature tensor and the Riemannian metric tensor, the scalar curvature is defined for every point on the manifold. The scalar curvature is null when the manifold is isometric to the Euclidean plane. It is be negative when the manifold is hyperbolic, or positive when the manifold is spherical. By definition, the scalar curvature is an intrinsic quantity, meaning that it does not depend on the ambient space. As a consequence, the scalar curvature is equivariant under diffeomorphisms. If we map a manifold (, g) to another manifold (', g', ∇') with a diffeomorphism φ: ' →, we can express the connection ∇' as the pullback of ∇: ∇' = dφ^*∇. The curvature of the pullback connection is the pullback of the curvature of the original connection. In other terms: dφ^*(∇) = (dφ^*∇) [Proposition 2.59]andrews2010ricci. In particular, if φ is an isometry: (∇) = (∇'). § THEORETICAL RESULTS §.§ Definition of the scalar curvature and other curvature measures The Christoffel symbols of the metric = + ∇_x f ∇_x f^⊤, in the parameter space Ω⊂^q with f the loss function is given by: Γ^i_kl = f_,i f_,kl/1+∇ f^2 We use below the Einstein sum notation, and in particular, for the scalar function f: ∂_i ∂_j f = f_,ij. The Christoffels symbols are obtained with the Riemannian metric: Γ_kl^i = 1/2 g^im( g_mk,l + g_ml,k - g_kl,m) Our metric is = + ∇ f ∇ f^⊤. 
Using the Sherman-Morrison formula: ^-1 = - ∇ f ∇ f^⊤/1+ ∇ f^2 g_ij = _ij = δ_ij + f_,i f_,j g_ij, k = f_,ik f_,j + f_,i f_,jk g_mk, l + g_ml, k - g_kl, m = 2 f_,kl f_,m g^im = ^-1_im = δ_im - f_,i f_,m/1+∇ f^2 Then: Γ^i_kl = (δ_im - f_,i f_,m/1+∇ f^2) f_,kl f_,m = f_,kl f_,i - f_,kl f_,i f_,m^2/1+∇ f^2 = f_,i f_,kl/1+∇ f^2. The coordinates of the Riemannian tensor curvature can be written with the Christoffel symbols: R^σ_μνκ = ∂Γ^σ_μκ/∂ x^ν - ∂Γ^σ_μν/∂ x^κ + Γ^σ_νλΓ^λ_μκ - Γ^σ_κλΓ^λ _μν The metric tensor = + ∇ f ∇ f^⊤ has for eigenvalues: {1,1,⋯, 1, 1+∇ f^2}. is a symmetric positive definite matrix, hence it is diagonalisable and all its eigenvectors w⃗ are orthogonal. Let's note = ∇ f. For the eigenvector : = (1+^2). For all the other eigenvectors, w⃗ =0 and w⃗=w⃗. The contraction of the Christoffel symbols for the metric = + ∇ f ∇ f^⊤: Γ_ki^i = f_,ik f_,i/1+∇ f^2. By definition, we have Γ^i_ki = ∂_k ln√(). By the previous lemma, we know that G = 1+∇ f^2 = 1+ f_,i^2. Γ^i_ki = ∂_k ln√() = ∂_k ln√( 1+ f_,i^2) = 1/2∂_k (1+ f_,i^2)/1+∇ f^2 = f_,ik f_,i/1+∇ f^2. Another method is to use the general expression of Γ^i_kl = f_,i f_,kl/1+∇ f^2, and the result is obtained for i=l. The Riemannian curvature tensor is given by: R^i_jkm = β ( f_,ik f_,jm - f_,jm f_,jk) - β^2 f_,i f_,r ( f_,rk f_,im - f_,rm f_,jk) The Riemannian curvature tensor is given by: R^i_jkm = ∂_k Γ^i_jm - ∂_m Γ^i_jk + Γ^i_rkΓ^r_jm - Γ^i_rmΓ^r_jk, and we have for Christoffel symbols: Γ^i_jm = β f_,i f_,jm. We note β = (1+∇ f^2)^-1. We have: ∂_k (β f_,i f_,jm) = ∂_k (β) f_,i f_,jm + β ( f_,ik f_,jm+ f_,i f_,jmk), and ∂_k (β) = - 2 β^2 f_ka f_a. ∂_k Γ^i_jm = -2β^2 f_,a f_,ak f_,i f_,jm + β ( f_,ik f_,jm+ f_,i f_,jmk) ∂_m Γ^i_jk = -2β^2 f_,a f_,ak f_,i f_,jm + β ( f_,im f_,jk+ f_,i f_,jkm) Γ^i_rkΓ^r_jm = β^2 f_,i f_,rk f_,r f_,jm Γ^i_rmΓ^r_jk = β^2 f_,i f_,rm f_,r f_,jk The Ricci scalar curvature is given by: R = β(()^2 - (^2)) + 2 β^2 ( ∇ f^⊤ (^2 - () ) ∇ f), with the Hessian of f. We use β^-1 = 1+∇ f^2, the Hessian of f, and ·_1,1 the matrix norm L_1,1. The Ricci tensor is given by: R_ab = R^i_aib = β ( f_,ii f_,ab- f_,bi f_,ai) - β^2 f_,i f_,r( f_,ir f_,ab- f_,br f_,ai) = β (()_ab-_ab^2) - β^2 ((∇ f^⊤∇ f)_ab-(∇ f)_a(∇ f)_b) The Ricci scalar is given by g^ab R_ab = δ_ab R_ab - β f_,a f_,b R_ab, and we notice: _aa = () f_,a_ab f_b = ∇ f^⊤∇ f (∇ f)_a f_,a = ∇ f^⊤∇ f R_ab = R_aa - β f_,a f_,b R_ab R_aa = β (()^2 - ()^2) - β^2 ((∇ f^⊤∇ f)() - ∇ f^⊤^2∇ f) β f_,a f_,b R_ab = β^2 (∇ f^⊤∇ f)() - ∇ f^⊤^2∇ f) - β^3 ((∇ f^⊤∇ f)^2 - (∇ f^⊤∇ f)^2) Then: R = β(()^2 - (^2)) - 2 β^2 ( ∇ f^⊤ (() - ^2) ∇ f) §.§ Perturbations on the weights Let _min an extremum, ε≪ 1 and a normalized vector. Then, minimising the trace of the square of the Hessian is equivalent to minimising the influence of the perturbations on the weights: f(_min + ε) - f(_min)_2^2 ≤1/4ε^4 (^2_min) The general Taylor expansion on f at _min + ε, with ε≪ 1 is: f(_min + ε) = f(_min) + ε^⊤ + ε^2/2^⊤ + o(ε^2 ^2). We now assume that is normalised such that =1. Note that, if is an eigenvector of then: ^⊤ = (). In general, each element of the vector is inferior to 1: _i^2 ≤ 1 and so, λ_i^2 _i^4 ≤λ_i^2. Furthermore, we have (_min) = 0. Thus: f(_min + ) - f(_min)_2^2 = ε^4/4(^⊤)^2 + o(ε^4) ≤ε^4/4(^2) + o(ε^4) §.§ Curvature over minibatches The Scalar curvature of the hessian of the full dataset is not equal to the expectation of the Scalar curvature over mini-batches. 
That is, there exists a dataset 𝒟 and mini-batches {ℬ_1, ℬ_2, …, ℬ_k} such that R(H_𝒟) ≠ 𝔼[R(H_ℬ_i)]. Suppose we have a dataset 𝒟 and mini-batches {ℬ_1, ℬ_2} such that the Hessians over the mini-batches are given by
\begin{pmatrix} -2 & 0 \\ 4 & 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix}.
They both have equal trace, -1, and their Ricci scalar curvatures are -2 and -6, respectively. The Hessian over the full dataset is given by
\begin{pmatrix} -1 & 2 \\ 6 & -1 \end{pmatrix}.
This has the same trace as the mini-batches, but its Ricci scalar curvature is -22, which is not equal to the average of the Ricci scalar curvatures over the mini-batches.
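As a sanity check of this counterexample, the short computation below (ours, not the authors') applies the corollary's formula R = tr(H)^2 − tr(H^2) and aggregates the batch Hessians by their mean, as in the mini-batch proposition. Because it averages rather than sums the two Hessians, the numbers differ from those quoted above, but the conclusion, equal traces yet unequal curvatures, is unchanged.

```python
# Hedged numerical check of the mini-batch counterexample.
import numpy as np

def curvature_at_min(H):
    # Curvature at a critical point: tr(H)^2 - tr(H^2), as in the corollary.
    return np.trace(H) ** 2 - np.trace(H @ H)

H1 = np.array([[-2.0, 0.0], [4.0, 1.0]])
H2 = np.array([[1.0, 2.0], [2.0, -2.0]])
H_full = 0.5 * (H1 + H2)   # mean of the batch Hessians

print(np.trace(H_full), 0.5 * (np.trace(H1) + np.trace(H2)))      # traces agree: -1.0 -1.0
print(curvature_at_min(H_full))                                    # -5.5
print(0.5 * (curvature_at_min(H1) + curvature_at_min(H2)))         # -8.0, not equal
```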
http://arxiv.org/abs/2307.05340v1
20230711152746
Jet separated by a large rapidity gap at the Tevatron and the LHC
[ "C. Royon" ]
hep-ph
[ "hep-ph", "hep-ex" ]
Jet separated by a large rapidity gap at the Tevatron and the LHC
Christophe Royon
Department of Physics and Astronomy, The University of Kansas, Lawrence, USA
We compare the recent measurements of gaps between jets at the Tevatron and the LHC with the Balitski Fadin Kuraev Lipatov framework. While a good agreement is obtained with Tevatron data, some discrepancies, especially for the rapidity separation between jets, are found that can be explained by an excess of initial state radiation in PYTHIA.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023
§ INTRODUCTION AND BFKL FORMALISM
In this paper, we discuss the description of recent measurements of gaps between jets, the so-called Mueller-Tang process <cit.>, at the Tevatron and the LHC using the Balitski Fadin Kuraev Lipatov (BFKL) <cit.> formalism. The schematic of jet gap jet events is shown in Fig. <ref>. Two jets separated by a difference in rapidity Δη are measured in the detector while a region in rapidity between (-1) and (1) is devoid of any particle. Experimentally, it is possible to veto on the presence of energy inside the calorimeter or on tracks from charged particles. A Pomeron is exchanged between the two jets so that there is no color flow. The natural dynamics to describe this kind of events is the BFKL one, while the Dokshitzer Gribov Lipatov Altarelli Parisi (DGLAP) <cit.> formalism leads to a negligible cross section when gaps are large enough, typically more than 1.5 units of rapidity. The BFKL jet gap jet cross section reads
\frac{d\sigma^{pp\to XJJY}}{dx_1\, dx_2\, dp_T^2} = S\, \frac{f_{\rm eff}(x_1,p_T^2)\, f_{\rm eff}(x_2,p_T^2)}{16\pi}\, |A(\Delta\eta,p_T^2)|^2 ,
where p_T is the jet transverse momentum (we assume that we have only two jets of same p_T that are produced at parton level), Δη the separation in rapidity between the two jets, x_1 and x_2 the energy fractions carried away by the jets, and S the survival probability (0.1 at the Tevatron, 0.03 at the LHC). The amplitude A reads
A = \frac{16 N_c \pi \alpha_s^2}{C_F p_T^2} \sum_{p=-\infty}^{\infty} \int \frac{d\gamma}{2 i \pi}\, \frac{[p^2-(\gamma-1/2)^2]\, \exp\left\{\frac{\alpha_S N_C}{\pi}\, \chi_{\rm eff}\, \Delta\eta\right\}}{[(\gamma-1/2)^2-(p-1/2)^2]\, [(\gamma-1/2)^2-(p+1/2)^2]} ,
where the sum runs over all conformal spins, and α_S is constant at LL and running using the renormalization group equations at NLL. The BFKL effective kernel χ_eff is determined numerically, solving the implicit equation χ_eff = χ_NLL(γ, α̅ χ_eff) <cit.>. The S4 resummation scheme <cit.> is used to remove spurious singularities in the BFKL NLL kernel. This formalism was fully implemented in the HERWIG <cit.> and PYTHIA <cit.> Monte Carlos, which is needed to take into account the jet size and the fact that the gap size is smaller than Δη between the jets by definition, the gap being defined at the edge of the jets <cit.>.
§ COMPARISON BETWEEN THE BFKL PREDICTION AND THE MEASUREMENT AT THE TEVATRON AND THE LHC
The D0 and CDF Collaborations at the Tevatron measured the ratio of jet-gap-jet events with respect to dijets as a function of jet p_T and Δη for a fixed gap region between (-1) and (1) in rapidity <cit.>. D0 data are shown in Fig. <ref> and are in good agreement with the BFKL NLL calculations as implemented in HERWIG. The CMS Collaboration recently measured the same ratio at the LHC energy of 13 TeV for the same fixed gap region <cit.>, and the results are shown in Fig. <ref>.
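As a rough illustration of how the amplitude above behaves, the following numerical sketch (ours, not from the paper) keeps only the leading conformal spin p = 0, uses the leading-logarithm kernel χ(γ) = 2ψ(1) − ψ(γ) − ψ(1−γ) instead of the resummed NLL kernel, takes a fixed effective coupling ᾱ = α_S N_C/π, and drops the overall normalization, integrating along the contour γ = 1/2 + iν.

```python
# Hedged sketch: p = 0 term of the Mueller-Tang amplitude at LL accuracy.
# Simplifications (ours): fixed coupling, no NLL/S4 kernel, no sum over conformal spins,
# no overall normalisation.  On the contour gamma = 1/2 + i*nu the integral becomes a
# real integral over nu, peaked at nu = 0 where chi reaches 4*ln(2).
import numpy as np
from mpmath import digamma, mpc

def chi_ll(nu):
    """chi(1/2 + i*nu) = 2*psi(1) - 2*Re psi(1/2 + i*nu), the LL BFKL eigenvalue."""
    return float(2 * digamma(1) - 2 * digamma(mpc(0.5, float(nu))).real)

def amplitude_p0(delta_eta, alpha_bar=0.15, nu_max=40.0, n=4001):
    nu = np.linspace(-nu_max, nu_max, n)
    vals = np.array([v * v * np.exp(alpha_bar * chi_ll(v) * delta_eta) / (v * v + 0.25) ** 2
                     for v in nu])
    return vals.sum() * (nu[1] - nu[0]) / (2 * np.pi)

for deta in (2.0, 4.0, 6.0, 8.0):
    # grows with the rapidity separation, roughly like exp(4*ln(2)*alpha_bar*deta)
    print(deta, amplitude_p0(deta))
```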
The BFKL predictions are computed using three definitions for the gap implemented in PYTHIA, namely the theoretical one (pure BFKL calculation), the experimental one (no charged particle above 200 MeV in the gap region -1 < η < 1 as defined by the CMS Collaboration) and the strict gap one (no particle above 1 MeV in the gap region) <cit.> for different values of the survival probabilities S indicated in Fig.. There is a clear discrepancy between the CMS measurement and the expectations using the experimental gap definition whereas the strict gap one leads to a good description of data. It is thus interesting to understand what changes between the 2 TeV Tevatron and the 13 TeV LHC. It is also worth mentioining that the BFKL calculation also describes the measurements at the LHC at 7 TeV <cit.>. The distribution of charged particles with P_T>200 MeV (as defined by the CMS gap definition) from PYTHIA in the gap region -1<η <1 with initial state radiation (ISR) ON and OFF are shown respectively on the left and right plots of Fig. <ref> <cit.>. Particles emitted at large angle with p_T > 200 MeV from ISR have a large influence on the gap presence or not, and thus on the gap definition (experimental or strict). It means that the number of particles emitted in the gap region and predicted by PYTHIA is too large and would need further tuning using data. The second point to be understood is why the discrepancy between the BFKL calculation and data was mainly at 13 TeV and not present at lower center-of-mass energies. As we mentioned, the ratio between jet gap jets and inclusive jets is measured. The events predicted by the BFKL dynamics using the experimental and strict gap definitions are more quark gluon induced processes at Tevatron energies and gluon gluon ones at LHC energies. It is the same for inclusive jet production (except at large Δη where quark gluon processes dominate at the LHC). The number of emitted particles by QCD radiation is much larger for gluon gluon processes than for quark gluon processes, and obviously with ISR ON. The fact that the agreement between BFKL calculations as implemented in PYTHIA and CMS measurements is poor is thus due to two reasons, namely too much ISR in PYTHIA and the fact that gluon gluon processes dominate at the 13 TeV LHC <cit.>. The ISR emission from PYTHIA is too large at high angle and must be further tuned for jet gap jet events using for instance J/Ψ-gap-J/Ψ events which is a gluon gluon dominated process. The full NLO BFKL calculation of jet gap jet processes was recently performed in Ref. <cit.> including the NLO impact factors. The effects of NLO corrections were found to be quite small and do not change the conclusions concerning ISR radiation in PYTHIA. § FIRST OBSERVATION OF JET GAP JET EVENTS IN DIFFRACTION BY THE CMS COLLABORATION The first measurement of jet gap jet events in diffraction was also performed recently by the CMS and TOTEM collaborations <cit.>. These events are very clean since multi-parton interaction effects are suppressed by requesting at least one proton to be tagged <cit.> and could represent an ideal process to look for BFKL resummation. 11 events were observed with a gap between jets and at least one proton tagged with about 0.7 pb^-1, as shown in Fig. <ref> where we display the jet gap jet fraction as a function of Δη between the jets and p_T of the second leading jets for diffractive and inclusive events. It is clear that the fraction of jet gap jet events is enhanced in diffraction. 
This measurement would benefit from more luminosity to get more differential measurements. To conclude, we presented a measurement of the jet gap jet fraction at the Tevatron (1.96 TeV) and at the LHC (7 and 13 TeV). A good agreement between the BFKL calculation and the measurement is found at Tevatron energies, but an apparent disagreement appears at 13 TeV. BFKL predictions are in fact very sensitive to ISR as described in PYTHIA especially for gluon gluon interaction processes, that dominate at 13 TeV. Too much ISR at high angle is predicted by PYTHIA and further tuning using for instance J/Ψ-gap-J/Ψ events should be performed. 99 mt A. H. Mueller, W. K. Tang, Phys. Lett. B 284 (1992) 123. bfkl V. S. Fadin, E. A. Kuraev, L. N. Lipatov, Phys. Lett. B60 (1975) 50; L. N. Lipatov, Sov. J. Nucl. Phys. 23 (1976) 338; E. A. Kuraev, L. N. Lipatov and V. S. Fadin, Sov. Phys. JETP 45 (1977) 199;I. I. Balitsky, L. N. Lipatov, Sov.J.Nucl.Phys. 28 (1978) 822; V.S. Fadin and L.N. Lipatov, Phys. Lett. B429 (1998) 127; M. Ciafaloni, Phys. Lett. B429 (1998) 363; M. Ciafaloni and G. Camici, Phys. Lett. B430 (1998) 349. dglap G. Altarelli and G. Parisi, Nucl. Phys. B126 18C (1977) 298; V.N. Gribov and L.N. Lipatov, Sov. Journ. Nucl. Phys. (1972) 438 and 675; Yu.L.Dokshitzer, Sov. Phys. JETP. 46 (1977) 64 mtus O. Kepka, C. Marquet, C. Royon, Phys. Rev. D83 (2011) 034036; F. Chevallier, O. Kepka, C. Marquet, C. Royon, Phys. Rev. D79 (2009) 094019. salam G. P. Salam, JHEP 9807, 019 (1998). herwig G. Marchesini et al., Comp. Phys. Comm. 67, 465 (1992). pythia T. Sjostrand, S. Mrenna and P.Z. Skands, PYTHIA 6.4 Physics and Manual, JHEP 05 (2006) 026 d0jgj B. Abbott et al., Phys. Lett. B 440, 189 (1998); F. Abe et al., Phys. Rev. Lett. 80, 1156 (1998). totemcms TOTEM and CMS Collaborations, Phys. Rev. D 104, 032009 (2021). ourpap C. Baldenegro, P. Gonzalez Duran, M. Klasen, C. Royon, J. Salomon, JHEP 08 (2022) 250. totemcmsb CMS Collaboration, Eur. Phys. J. C 78 (2018) 242. jgjnlo D. Colferai, F. Deganutti, T. Raben, C. Royon, JHEP 06 (2023) 091. jgjpap C. Marquet, C. Royon, M. Trzebinski, R. Zlebcik, Phys.Rev. D87 (2013) 3, 034010.
http://arxiv.org/abs/2307.07494v1
20230714172722
TALL: Thumbnail Layout for Deepfake Video Detection
[ "Yuting Xu", "Jian Liang", "Gengyun Jia", "Ziming Yang", "Yanhao Zhang", "Ran He" ]
cs.CV
[ "cs.CV" ]
TALL: Thumbnail Layout for Deepfake Video Detection
(This work was finished when Yuting was a student in CRIPAC.)
Yuting Xu^1,3, Jian Liang^2,4 (project leader), Gengyun Jia^5, Ziming Yang^1,3, Yanhao Zhang^6, Ran He^2,4
^1 Institute of Information Engineering, Chinese Academy of Sciences ^2 CRIPAC & MAIS, Institute of Automation, Chinese Academy of Sciences ^3 School of Cyber Security, UCAS ^4 School of Artificial Intelligence, UCAS ^5 Nanjing University of Posts and Telecommunications ^6 OPPO Research Institute
[email protected], [email protected]
The growing threats of deepfakes to society and cybersecurity have raised enormous public concerns, and increasing efforts have been devoted to the critical topic of deepfake video detection. Existing video methods achieve good performance but are computationally intensive. This paper introduces a simple yet effective strategy named Thumbnail Layout (TALL), which transforms a video clip into a pre-defined layout so as to preserve spatial and temporal dependencies. Specifically, consecutive frames are masked at a fixed position in each frame to improve generalization, then resized to sub-images and rearranged into a pre-defined layout as the thumbnail. TALL is model-agnostic and extremely simple, requiring only a few lines of code to be modified. Inspired by the success of vision transformers, we incorporate TALL into the Swin Transformer, forming the efficient and effective method TALL-Swin. Extensive intra-dataset and cross-dataset experiments validate the effectiveness and superiority of TALL and the state-of-the-art TALL-Swin. TALL-Swin achieves 90.79% AUC on the challenging cross-dataset task, FaceForensics++ → Celeb-DF. The code is available at <https://github.com/rainy-xu/TALL4Deepfake>.
§ INTRODUCTION
Deepfakes generate and manipulate facial appearances to deceive viewers through generation techniques <cit.>. With the remarkable success of generative adversarial networks <cit.>, deepfake products have become so photo-realistic that humans cannot distinguish them. These deepfake products <cit.> may be misused for malicious purposes, leading to severe trust issues and security problems, such as financial fraud, identity theft, and celebrity impersonation <cit.>. The rapid development of social media exacerbates the abuse of deepfakes. Therefore, it is crucial to develop advanced detection methods to protect the data privacy of individual users. Most previous image-based methods <cit.> perform well in the intra-dataset setting, but their generalizability needs to be improved. Recent research has focused on video-based methods that detect deepfakes by modeling spatio-temporal dependencies. There are subtle spatio-temporal inconsistencies between frames since the deepfake algorithms are executed frame by frame. The core of video-level approaches for deepfake detection is capturing these inconsistencies through temporal modeling. Existing deepfake video detection methods generally follow two directions.
Some methods <cit.> use two-branch networks or modules to learn spatial and temporal information separately and then fuse them. However, these two-branch approaches may fragment spatiotemporal cooperation and lead to subtle artifacts being neglected. Others <cit.> directly use classic temporal models such as LSTM <cit.> and 3D-CNN <cit.>. These methods are computationally intensive. The current rise of transformers for vision task backbones has prompted the emergence of corresponding deepfake detection methods. They are accompanied by significant computational complexity that makes them challenging to deploy and use, despite breakthroughs in performance. To enjoy benefits from both image and video methods, we are curious to see whether it is possible to append information about the temporal dimension to the image dimension. This work develops a simple yet effective Thumbnail Layout (TALL) for deepfake detection by spatio-temporal modeling. TALL is computationally cheap and retains both temporal and spatial information. In detail, we use dense sampling to extract multiple clips in the video and then randomly select four consecutive frames in the video segment. Subsequently, a block is masked at a fixed position in each frame. Finally, the frames are resized as sub-image and sequentially rearranged into a pre-defined layout as a thumbnail, which has the same size as the clip frames. As shown in Fig. <ref>, TALL brings two advantages compared to the previous spatio-temporal modeling methods for deepfake detection: (1) TALL contains local and global contextual deepfake patterns. (2) TALL is a model-agnostic method for spatio-temporal modeling deepfake patterns at zero computation and zero parameters. Furthermore, we discover that the better temporal modeling capabilities backbone has, the better performance TALL achieves. Based on the proposed TALL, we complement a baseline for video deepfake detection based on Swin Transformer <cit.>, called TALL-Swin. We validate TALL-Swin on four popular benchmark datasets, including FaceForensics++ <cit.>, Celeb-DF <cit.>, DFDC <cit.> and DeeperForensics <cit.>. Our method gains a remarkable improvement over the state-of-the-art approaches. The main contributions of our paper are summarized as follows: * We provide a new perspective for an efficient strategy for video deepfake detection called Thumbnail Layout (TALL), which incorporates both spatial-temporal dependencies, and allows the model to capture spatial-temporal inconsistencies. * We propose a spatio-temporal modeling method called TALL-Swin, which efficiently captures the inconsistencies between deepfake video frames. * Extensive experiments demonstrate the validity of our proposed TALL and TALL-Swin. TALL-Swin outperforms previous methods in both intra-dataset and cross-dataset scenarios. § RELATED WORK §.§ Image-Level Deepfake Detection Typically, existing deepfake detection methods fall into two categories: image-level and video-level methods. The image-level methods <cit.> always exploit the artifacts of deepfake images in the spatial domain, such as discrepancies between local regions <cit.>, grid-like structure in frequency space <cit.>, and differences in global texture statistics <cit.> that provide specific clues to distinguish deepfakes from the real images. F3Net <cit.> and FDFL <cit.> utilize the same pipeline that utilizes frequency-aware features and RGB information to capture the traces in different input spaces separately. 
RFM <cit.> and Multi-att <cit.> propose an attention-guided data augmentation mechanism to guide detectors to discover undetectable deepfake clues. Face X-ray <cit.> and PCL <cit.> provide effective ways to outline the boundary of the forged face for detecting deepfakes. ICT <cit.> exploits an identity extraction module to detect identity inconsistency in the suspect image. Similarly, M2tr <cit.> and CORE <cit.> detect local inconsistencies within frames at different spatial levels. Generally, image-level methods suffer over-fitting issues when a specific technique manipulates the images, and they ignore temporal information. §.§ Video-Level Deepfake Detection To improve the generalization of deepfake detectors, many studies <cit.> generate diversity and generic deepfake data, while other studies <cit.> capture the temporal incoherence of fake videos as the generic clues. Some recent works <cit.> propose detecting temporal inconsistency using well-designed spatio-temporal neural networks, and others <cit.> attempt to add modules to image models that capture temporal information. STIL <cit.> formulates deepfake video detection as a spatial and temporal inconsistency learning process and integrates both spatial and temporal features in a unified 2D CNN framework. FTCN <cit.> detects temporal-related artifacts instead of spatial artifacts to promote generalization. LipForensics <cit.> is proposed to learn high-level semantic irregularity in mouth movement in the generated video. RealForensics <cit.> uses auxiliary data sets during training in exchange for generalization at the cost of higher computational demands. The video-based methods achieve strong generalization but suffer from large computational overhead. To reduce computational costs, we propose TALL which gathers consecutive video frames into thumbnails for learning spatio-temporal consistency. §.§ Deepfake Detection with Vision Transformer Recently, ViT <cit.> has achieved impressive performance in computer vision tasks. Many studies extend the ViT for deepfake detection <cit.>. These methods achieve better performance compared to CNN-based models <cit.>, but also sacrifice computational efficiency. Different from two-branch architectures <cit.>, it captures short-range and long-range temporal inconsistencies with a single-branch model. Due to the advent of the visual transformer (ViT) and the impressive ability to model long-range data, a few works <cit.> attempt to extend the transformer for deepfake detection. ICT <cit.> aims to detect identity consistency in deepfake video but may fail in detecting face reenactment and entire face synthesis results. DFLL <cit.> extract the UV texture map to help the transformer to detect deepfakes, which may disrupt the continuity between video frames. DFTD <cit.> leverages ViT to consider both global and local information but ignores the problem of excessive model arithmetic requirements. Although the transformer-based approaches <cit.> achieve promising performance, they are accompanied by significant computational complexity that makes them challenging to deploy and use, and the long-range dependencies may be insufficiently exploited in detection models. Swin Transformer <cit.> produces a hierarchical feature representation and has linear computational complexity concerning input image size, which is suitable as a general-purpose backbone for various vision tasks. In this paper, we cooperate with Swin Transformer to form our robust and efficient method TALL-Swin. 
§ METHOD TALL is a deepfake video detection strategy that transforms a video clip into an all-in-one thumbnail without the extra computational overhead. In the following sections, we begin with the motivation of TALL for deepfake detection in Section <ref>. Then we present the technical details of the TALL in Section <ref>. Finally, a generalizable Swin-TALL baseline is introduced to explore subtle artifacts in Section <ref>. More details of the layout design are presented in the Appendix. §.§ Motivation While recent studies have attempted to address noticeable flaws through techniques like slight motion blurring and temporal consistency loss, subtle spatio-temporal artifacts still remain. These artifacts are important for detecting deepfakes, but they introduce two problems: 1) video-based models are less efficient, and 2) analyzing information over long distances may overlook local artifacts, which are critical for deepfake detection. To address these challenges, we propose the TALL strategy, which naturally incorporates temporal information into image-level tasks without disrupting spatial information. This approach enables the image-level model to detect deepfakes in videos. Furthermore, we discovered that TALL provides even greater performance gains when combined with a powerful spatial model, resulting in the TALL-Swin. In detail, TALL arranges consecutive frames in the temporal order in a compact 2×2 layout, in line with the calculation theory of convolution and shifted window. TALL contains both spatial and temporal information so that model can learn both intra-frame artifacts and inter-frame inconsistency and obtains comparable performance to video-based methods. Here we use the shifted window to explain TALL's mechanism. As illustrated in Fig. <ref> (a), the model computes self-attention while accounting for spatial dependencies across sub-images (represented by the solid red box). When the window spans multiple sub-images (represented by the red dash box), the model is able to capture temporal inconsistencies between frames. Moreover, TALL leverages both local and global contexts of deepfake patterns to ensure robust modeling capabilities for short and long-range spatial dependencies. Compared to previous methods, we anticipate that TALL strikes a balance between speed and accuracy, sacrificing a little spatial information while preserving performance. Based on the fact that attention-based models are better at handling contextual features and that the Swin-Transformer uses sliding windows to reduce computation and memory, we further complement TALL-Swin baseline for video deepfake detection. §.§ Thumbnail Layout (TALL) Given a video V∈ℝ^T×C×H×W, where T is the frame length of the video, C is the number of channels, and H×W is the resolution of the frames. Assuming each video contains N clips, we divide a video into N equal segments of length T/N and then sample consecutive t (set to 4 by default) frames from the segments at random locations to form one clip. Then, the thumbnail I is rearranged of sub-images (C×H/√(t)×W/√(t)) that are resized from the above t frames. To maximize the utility of TALL, we mask the organized N square masks of the thumbnail. It is based on two core designs: 1) The position of the masks is random between different sub-images, which retains the advantages of the Cutout <cit.> that encourages the network to focus more on complementary and less prominent features. 
2) We fix the position of the mask within a clip to take advantage of the fact that most deepfake videos are frame-by-frame tampered with, thus forcing the model to detect inconsistencies between adjacent frames of the deepfake videos. We do not allow the mask to appear on the seams of the thumbnail but allow for partial mask inclusion in the thumbnail. The detailed procedure of TALL is summarized in Algorithm <ref>. §.§ TALL-Swin To balance efficiency and model performance for spatio-temporal feature learning and to leverage the benefits of attention-based models, we enhanced a baseline deepfake detection model called TALL-Swin by incorporating the Swin Transformer <cit.>. Given the characteristics of TALL, we slightly modified the window size of Swin-B in TALL-Swin. We first enlarge the window size of the first three stages of the model so that the interaction between frames in the thumbnail becomes more frequent, forcing the model to learn more detailed spatio-temporal dependencies. Next, we set the window size of the last stage to be the same as the feature map size, enabling the window to perform global attention computations while TALL-Swin captures global spatial-temporal dependencies. As a result, the size of the last layer of the feature map became smaller, reducing the window size without introducing any additional computational overhead. Consequently, the window sizes for the four stages of TALL-Swin are [14,14,14,7]. Note that the patch merging process makes TALL-Swin captures a more comprehensive range of dependencies through hierarchical representations, as shown in Fig. <ref> (b). Given a video of length T, each frame contains N patches, and the window contains P patches. To demonstrate the superiority of TALL-Swin in terms of computational consumption, we show below the computational complexity of the image-level transformer and video-level transformer, including ViT <cit.>, Swin <cit.>, ViViT <cit.> (model1), and TALL-Swin respectively: [ Ω_ViT = 4TNC^2 + 2TN^2C,; [1mm] Ω_Swin = 4TNC^2 + 2TPNC,; [1mm] Ω_ViViT = 4TNC + 2T^2N^2C,; [1mm] Ω_TALL-Swin = TNC^2 + 1/2TPNC. ] TALL-Swin has the lowest computational complexity compared to image and video-level transformer methods. Subsequent experiments will demonstrate that TALL-Swin maintains performance, albeit at the sacrifice of some spatial information. The cross-entropy loss is employed to optimize the TALL-Swin, which is defined as: ℒ_CE = - 1/n∑_i=1^n( y_i logℱ(x_i) + (1-y_i) log(1- ℱ(x_i)) ), where x_i indicates input clip, n is the length of clip, y_i denotes the label of clip, ℱ is TALL-Swin. § EXPERIMENTS §.§ Setup Datasets. Following previous works <cit.>, we evaluate the TALL and TALL-Swin on four widely used datasets. FaceForensics++ <cit.> is a most-used benchmark on intra-dataset deepfake detection, consisting of 1,000 real videos and 4,000 fake videos in four different manipulations: DeepFake <cit.>, FaceSwap <cit.>, Face2Face <cit.>, and NeuralTextures <cit.>. Besides, FaceForensics++ contains multiple video qualities, high quality (HQ), low quality (LQ) and RAW. Celeb-DF (CDF) <cit.> is a popular benchmark on cross-dataset, which contains 5,693 deepfake videos generated from celebrities. The improved compositing process was used to improve the various visual artifacts presented in the video. Celeb-DF is also suitable for deepfake detection tasks with a reference set. DFDC <cit.> is a large-scale benchmark developed for Deepfake Detection Challenge. This dataset includes 124k videos from 3,426 paid actors. 
The existing deepfake detection methods do perform not very well on DFDC due to their sophisticated deepfake techniques. DeeperForensics (DFo) <cit.> includes 60,000 videos with 17.6 million frames for deepfake detection, whose videos vary in identity, pose, expression, emotion, lighting conditions, and blend shape with high quality. Implementation Details. We use MTCNN <cit.> to detect face for each frame in the deepfake videos, only extract the maximum area bounding box and add 30% face crop size from each side as in LipForensics <cit.>. The ImageNet-21K <cit.> pretrained Swin-B model is used as our backbone. Excluding ablation experiments, we sample 8 clips using dense sampling, each clip contains 4 frames. The size of the thumbnail is 224×224. Following Swin Transformer <cit.>, Adam <cit.> optimization is used with a learning rate of 1.5e-5 and batch size of 4, using a cosine decay learning rate scheduler and 10 epochs of linear warm-up. We adopt Acc (accuracy) and AUC (Area Under Receiver Operating Characteristic Curve) as the evaluation metrics for extensive experiments. To ensure a fair comparison, we calculate video-level predictions for the image-based method and average the predictions across the entire video (following previous works <cit.>). Note that results are directly cited from published papers if we follow the same setting. §.§ Scaling over Backbones To verify our assumption, we adopt several image-level backbones commonly used for deepfake detection for comparison with the video-level backbones. As shown in Table <ref> above the double horizontal line, we first compare the accuracy and complexity of the CNN-based video and image backbones. Although I3D <cit.> and R3D <cit.> achieve better performance than vanilla ResNet50 <cit.> and EfficientNet <cit.>, the computation costs are huge, such as R3D-50 with 296G FLOPs. For ResNet and EfficientNet who added TALL, ResNet achieves better AUC both on CDF (76.38 VS 80.93) and DFDC (64.01 VS 65.54) datasets, and EfficientNet achieves 5.18% better AUC on CDF. The second section contains the video and image transformers. Compared to video transformers, the image-based ViT and Swin fail to achieve better performance due to the lack of temporal modeling. For example, ViViT achieves 86.96% AUC on CDF, which is 3.6% higher than Swin although ViViT with 13× more computation. By way of contrast, ViT+TALL achieves 86.58% AUC on CDF with 55.4G FLOPs, which is comparable to AUC with ViViT but with low computation. Accordingly, Swin's performance was significantly improved with the addition of TALL without computation increment. On the other hand, TALL boosts higher performance on models with learned long-range dependencies. , ResNet+TALL (+4.5% on CDF and +1.5% on DFDC) Swin+TALL (+7.6% on CDF and +3.6% on DFDC). These two section results demonstrate that TALL provides both spatial and temporal information and enables the model to learn spatial and temporal inconsistencies for video deepfake detection. §.§ Comparison with State-of-the-art Methods Intra-dataset evaluations. Following ISTVT <cit.>, we show the results of the FF++ dataset under both Low Quality (LQ) and High Quality (HQ) videos, and report comparisons against several advanced methods in Table <ref>. We can observe that advanced video-based transformers have better results than CNN-based methods. Compared to video-based transformer methods, TALL-Swin has comparable performance and lower consumption to the previous video transformer method with HQ settings. 
However, TALL-Swin gets unsatisfactory results with the LQ setting. The LQ setting is obtained by severely compressing the videos. So the reason for the result may be that TALL scales the frame to a smaller size, causing more spatial information to be lost in the frame. We will investigate the possibility of other designs to further improve performance in the LQ setting. Generalization to unseen datasets In addition to the intra-dataset comparisons, we also investigate the generalization ability of our method. Adhering to the deepfake video detection cross-dataset protocol <cit.>, we train a model on FF++ (HQ) then test on Celeb-DF (CDF), DFDC, FaceShifter (FSh), and DeeperForensics (DFo) datasets. As shown in Table <ref>: (1) Video-based methods generally have better results than image-based methods, which shows that temporal information is helpful for the deepfake video detection task. For example, Lip outperforms Face X-ray's AUC by a wide margin. In addition, most transformer-based models have higher performance than CNN-based models. For the transformer-based models, both achieved an average AUC of 88%, while the best CNN-based video-level models only achieved 87%. (2) TALL-Swin achieves state-of-the-art results on Celeb-DF, DFDC, and DeeperForensics datasets, and also beats its competitors on Celeb-DF dataset by a large margin (3.8%). The results demonstrate that TALL-Swin performs well when encountering unseen datasets with better generalization ability than previous video transformer methods. Analysis of saliency map visualization. We adopt Grad-CAM <cit.> to visualize where the TALL-Swin is paying its attention to the deepfake faces. In Fig. <ref>, we give the results on intra-dataset and cross-dataset scenarios. All models are trained on FF++ (HQ). It can be observed in the first four rows of Fig. <ref> that TALL-Swin captures method-specific artifacts. Note that the DF transfers the face region from a source video to a target, and the NT only modifies the facial expressions corresponding to the mouth region. TALL-Swin corresponds to focus on the face region and the mouth region. Furthermore, our model traces the more generalized artifacts that are independent of manipulation methods, , blending boundaries (CDF), and abnormal motions in the clip (DFDC, Fsh, Dfo). Robustness to unseen perturbations. Deepfake detectors must be robust to common perturbations, given that video propagation on social media causes video compression, noise addition, etc. We also study the performance of robustness to unseen perturbations. Following RealForensics <cit.>, the experiment applies seven unseen perturbations to fake videos at five intensity levels. In Fig. <ref>, we show results of increasing the severity of each corruption. We can observe that other methods degrade dramatically as the perturbations become more severe. TALL-Swin still has a high performance. However, TALL-Swin degrades when the Gaussian noise reaches level five. Table <ref> presents the average AUC across all intensity levels for corruption types. We observe that our method is significantly more robust to most perturbations than other methods. The good robustness may be from both the design of TALL and the proposed data augmentation. The main reason may be the consecutive multi-frame input. We empirically consider that the key to deepfake detection is local inconsistency, the continuous frame design has less redundant information, ensuring that the model finds locally important clues. 
We will explore relevant experiments in subsequent research. §.§ Ablation Study We perform the ablation study to analyze the effects of each component and hyper-parameter in TALL-Swin. All experiments are trained on FF++ (HQ) and tested on the CDF and DFDC datasets. Study on different layouts. We use vanilla Swin-B as the baseline for this study to compare the effect of different thumbnail layout schemes on the model's generalization ability. Changing frames to thumbnails involves scaling, so we also investigate the impact of resizing and random cropping pre-processing on model performance. We set up four variants: resizing pre-process with 4×4 layout, 3×3 layout and 2×2 layout; random cropping pre-process with 2×2 layout. As shown in Table <ref>, the model performance degrades sharply when using 4×4 layout. This may be due to the small size of each sub-image that the spatial information is not captured well by the model. The result of 3×3 layout also slightly decreases. 2×2 layout with resizing pre-processing beats 2×2 layout with random crop. We also found that TALL-Swin achieves the best performance and the AUC score increases 3.2% compared to the baseline, suggesting that thumbnails in a 2×2 layout are more helpful to TALL-Swin than normal frames. Study on window size. We study the effect of window size on model performance and computational cost. The results are shown in Table <ref>. Our window expansion for the first three phases will increase the model performance by 1.74% AUC. The results in the second and third rows show that the first three stages of the window getting the largest would not give a boost to the model. Our analysis of a too-large window may lead to a weakening of the model's ability to learn local information in the sub-image. Effectiveness of TALL's augmentation strategy. In this work, TALL-Swin is trained on the FF++ (HQ) dataset without any data enhancement as the baseline except for Multi-scale Crop and Random Horizontal Flip. To validate the effectiveness of the augmentation strategy, we compare our default baseline with different data augmentation strategies: 1) The Cutout <cit.> on one sub-image; 2) The Cutout on four sub-images. 3) The combination of Mixup <cit.> and Cutmix <cit.> on four sub-images, as shown in Table <ref>. The performance of a random Cutout <cit.> on four sub-images is better than on one sub-image. Besides, the augmentation strategy leads to better performance than the well-known Cutout (1.46%). This supports our hypothesis that strategy encourages models to learn subtle temporal-spatial variations and improves model generalization ability. Further, our augmentation strategy exceeds 1.02% than the combination of Mixup and Cutmix, demonstrating the augmentation's effectiveness in TALL for video detection. Study on absence and order of thumbnails. In this case, we study the impact of missing the last sub-image and the last two sub-images on the model's performance. Besides, we set the order of the different thumbnails to evaluate the TALL-Swin. We consider three orders: forward, reverse, and random. The first two rows of Table <ref> show that all four sub-images contribute to the model performance. Further, the result surprised us that sub-images are arranged in forwarding order beats other order settings. We expected the same performance from forward and reverse orders. We found that a similar phenomenon had occurred in S3D <cit.>. 
The deepfake detection task also requires capturing fine-grained differences between visually similar artifacts, which may explain this phenomenon. § CONCLUSIONS This paper presents a novel perspective on detecting deepfake videos using TALL. TALL is both simple and effective, enabling joint spatio-temporal modeling without any additional cost. The TALL representation reveals common deepfake patterns through local-global contextual features. We further propose a new baseline for deepfake video detection called TALL-Swin, which efficiently captures the inconsistencies between deepfake video frames. Extensive experiments demonstrate that TALL-Swin achieves promising results for various unseen deepfake types and strong robustness to a wide range of common corruptions. Limitations. Currently, we are limited to verifying TALL on the available backbones due to the lack of open-source counterparts. It would be interesting to explore applying TALL on top of more advanced image-level methods. § ACKNOWLEDGMENT The authors wish to thank Huaibo Huang and Lijun Sheng, in no particular order, for insightful discussions. § IMPLEMENTATION DETAILS We employ the AdamW <cit.> optimizer for 60 epochs with a 10-epoch linear warm-up. An initial warm-up learning rate of 1.5e-8 and a weight decay of 1e-5 are used. All experiments are conducted on a single Nvidia TITAN RTX 24 GB GPU and an Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz. Our method is implemented with PyTorch v1.7.0 and torchvision 0.8.2, built upon the open-source Swin Transformer <cit.> codebase. § ADDITIONAL EXPERIMENTS §.§ Effects of different layouts. We train a TALL-Swin model on FF++ (HQ) for each layout illustrated in Fig. <ref>, to analyze which thumbnail layout lets the model learn the most generalizable spatial-temporal dependencies of deepfake patterns. As shown in Table <ref>, the model with a compact layout such as Fig. <ref> (d) has good generalization ability on the unseen datasets. Such a compact layout may help the model learn the temporal dependence across frames because it provides the shortest distance between any two sub-images. §.§ Effects of sub-image size. We eliminate the scaling operation for sub-images to allow for more flexible layout settings. However, we observe that when the number of sub-images grows at their original size, the performance improvements are only slight. Additionally, the computational complexity increases dramatically with the number of frames (more than 5.3 times the TALL setting), as demonstrated in Table <ref>. To strike a balance between performance and computational complexity, we reduce the resolution of the sub-images in TALL.
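As a concrete illustration of the thumbnail layout studied above, the following sketch rearranges a clip of T = 4 consecutive frames into a single 2×2 thumbnail that a plain 2D backbone such as Swin-B can consume directly. It is a minimal PyTorch sketch under assumed tensor shapes and an assumed sub-image size, not the released TALL-Swin implementation.

import torch
import torch.nn.functional as F

def to_thumbnail(clip: torch.Tensor, grid: int = 2, sub_size: int = 112) -> torch.Tensor:
    # clip: (B, T, C, H, W) with T = grid * grid consecutive frames.
    # Each frame is resized to sub_size x sub_size and tiled row by row,
    # so a 2D image backbone sees the whole clip as one image.
    b, t, c, h, w = clip.shape
    assert t == grid * grid, "number of frames must fill the grid"
    frames = clip.reshape(b * t, c, h, w)
    frames = F.interpolate(frames, size=(sub_size, sub_size),
                           mode="bilinear", align_corners=False)
    frames = frames.reshape(b, grid, grid, c, sub_size, sub_size)
    # (B, gh, gw, C, s, s) -> (B, C, gh, s, gw, s) -> (B, C, gh*s, gw*s)
    thumb = frames.permute(0, 3, 1, 4, 2, 5).reshape(
        b, c, grid * sub_size, grid * sub_size)
    return thumb

# A batch of 8 four-frame clips becomes 8 thumbnails of size 224 x 224,
# which can be fed to an unmodified image backbone.
clip = torch.randn(8, 4, 3, 224, 224)
print(to_thumbnail(clip).shape)  # torch.Size([8, 3, 224, 224])

Because the layout is a parameter-free reshaping, it adds no extra parameters or computation to the backbone itself, which is consistent with the cost-free nature of TALL emphasized above.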
http://arxiv.org/abs/2307.04226v2
20230709163747
Seismic Data Interpolation based on Denoising Diffusion Implicit Models with Resampling
[ "Xiaoli Wei", "Chunxia Zhang", "Hongtao Wang", "Chengli Tan", "Deng Xiong", "Baisong Jiang", "Jiangshe Zhang", "Sang-Woon Kim" ]
physics.geo-ph
[ "physics.geo-ph", "stat.ML" ]
Seismic Data Interpolation based on Denoising Diffusion Implicit Models with Resampling Xiaoli Wei, Chunxia Zhang, Member, IEEE, Hongtao Wang, Chengli Tan, Deng Xiong, Baisong Jiang, Jiangshe Zhang, Sang-Woon Kim, Life Senior Member, IEEE Corresponding author: Chunxia Zhang. E-mail: [email protected]. Xiaoli Wei, Chunxia Zhang, Hongtao Wang, Chengli Tan, Baisong Jiang, Jiangshe Zhang are with the School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China. Deng Xiong is with the Geophysical Technology Research and Development Center, BGP, Zhuozhou, Hebei, 072751, China. Sang-Woon Kim is with the Department of Computer Engineering, Myongji University, Yongin, 17058, South Korea. This research was supported by the National Key Research and Development Program of China (No. 2018AAA0102201) and the National Natural Science Foundation of China (No. 61976174). This work has been submitted to the IEEE TGRS for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. August 12, 2023 ================================================================================ The incompleteness of the seismic data caused by missing traces along the spatial extension is a common issue in seismic acquisition due to the existence of obstacles and economic constraints, which severely impairs the imaging quality of subsurface geological structures. Recently, deep learning-based seismic interpolation methods have attained promising progress, while achieving stable training of generative adversarial networks is not easy, and performance degradation is usually notable if the missing patterns in the testing and training do not match. In this paper, we propose a novel seismic denoising diffusion implicit model with resampling. The model training is established on the denoising diffusion probabilistic model, where U-Net is equipped with the multi-head self-attention to match the noise in each step. The cosine noise schedule, serving as the global noise configuration, promotes the high utilization of known trace information by accelerating the passage of the excessive noise stages. The model inference utilizes the denoising diffusion implicit model, conditioning on the known traces, to enable high-quality interpolation with fewer diffusion steps.
To enhance the coherency between the known traces and the missing traces within each reverse step, the inference process integrates a resampling strategy to achieve an information recap on the former interpolated traces. Extensive experiments conducted on synthetic and field seismic data validate the superiority of our model and its robustness to various missing patterns. In addition, uncertainty quantification and ablation studies are also investigated. Seismic data interpolation, denoising diffusion model, multi-head self-attention, resampling § INTRODUCTION Seismic exploration interprets geological information and infers subsurface properties by analyzing the pre-stack data collected by geophones planted in the field. Acquisition of high-quality seismic data is a key factor for high-quality seismic data processing and interpretation. However, the collected seismic data is usually severely degraded due to the complex natural environment or limited budget. The degradation of data integrity is typically observed in the form of random or consecutive missing seismic traces, resulting in undersampled or aliased seismic data <cit.>. Seismic data interpolation has been extensively investigated over the past decades. Initially developed traditional methods often rely on the assumption of global or local linear events to convert the problem into an autoregressive framework <cit.>. Especially, prediction-filter-based methods, combined with the t-x and f-x regularization <cit.>, <cit.>, occupy the research mainstream in this direction. Besides, wave-equation-based methods are able to extrapolate and interpolate wave field <cit.>, whereas they require additional information, e.g., wave velocity. Two successful categories of model-driven methods involve different constraints to recover seismic data. The first category is the sparsity-based method, which introduces various sparse transforms and sampling functions to interpolate missing data <cit.>. Among these methods, those derived from the projection onto convex sets <cit.> have received more attention due to their relatively high performance. The second category applies the low-rank constraint model to recover data, e.g., using singular value decomposition on block Hankel matrix <cit.>. While the traditional methods and model-driven methods are capable of achieving interpolation from a theoretical perspective, issues such as manual parameter selection and enormous computation cost cannot be ignored, particularly for massive and high-dimensional field seismic data with advancements in collection technology and efficiency. With the rapid advancement of deep learning-based generative models, the research focus for seismic data interpolation has shifted towards data-driven methods, which mainly include two categories, i.e., generative neural network and generative adversarial network (GAN). The preliminary methods in the first category of data-driven models contain the convolutional autoencoder (CAE) <cit.>, <cit.>, U-Net <cit.>, <cit.>, and residual network (ResNets) <cit.>, etc. Liu et al. <cit.> introduce the invertible discrete wavelet transform for replacing the pooling operations in the traditional U-Net model, thereby avoiding the loss of detailed features caused by the downsampling scheme. 
Some researchers have worked on improving the long-range feature correlation via different attention modules <cit.>, <cit.>, which is critical for maintaining global content consistency, especially in the case of consecutively missing seismic traces <cit.>. Furthermore, regularization terms are important in finding the optimal interpolation function, e.g., spectrum suppression <cit.> and the regeneration constraint <cit.>. Some studies also focus on improving the seismic feature extraction ability of neural networks, including the adoption of UNet++ with a nested architecture <cit.> and dynamically updating the valid convolution region <cit.>. However, a standalone neural network is usually insufficient to capture the vast range of dynamic energy in seismic data. To resolve this issue, the coarse-refine network <cit.> and the multi-stage active learning method <cit.> have been proposed, which exploit the strengths of every sub-network to make the interpolation process more efficient and effective. The second category of data-driven models, GAN-based methods, has achieved impressive results in seismic data interpolation. Kaur et al. <cit.> adopt the framework of CycleGAN to perform self-learning on the seismic features. The conditional generative adversarial network (CGAN) has been introduced to interpolate seismic data with consecutively missing traces <cit.>. Based on CGAN, a dual-branch interpolation method combining the time and frequency domains improves the smoothness and quality of the reconstructed seismic data <cit.>. Large obstacles are a common problem in seismic exploration, leading to big gaps in the collected seismic data and impairing further data processing. Conditional Wasserstein generative adversarial networks with gradient penalty (WGAN-GP) have shown promising seismic feature generation capability <cit.>; the gradient penalty enhances the fidelity of reconstructed signals over large intervals by enforcing the Lipschitz constraint. A coarse-to-fine learning strategy driven by the joint use of different losses strengthens the connection between different stages and enables the relativistic average least-squares generative adversarial network (RaLSGAN) to produce more accurate and realistic signal details <cit.>. Although deep learning-based seismic data interpolation methods have attracted considerable attention, the instability of GAN training and the complexity of field data still limit their further development. First, while the generator can be implemented with a state-of-the-art generative architecture for seismic data reconstruction, training a discriminator cannot be avoided in a GAN-based model, and the optimal solution often lies at a saddle point instead of a local minimum <cit.>. Stable adversarial training requires good initialization and hyperparameter settings. Second, field seismic data usually contain multiple missing forms due to ground obstacles, geophone layout conditions, and other factors. The aforementioned data-driven methods are either tailored to a specific missing form of seismic data or need retraining when interpolating seismic data with different missing ratios or forms. Since their training is based on a certain mask distribution, the performance of the model may degrade to varying degrees, or even fail to achieve the desired effect, when transferred to a new scenario.
In this paper, we propose a new seismic denoising diffusion implicit model with resampling (SeisDDIMR) to address the above issues, showing that it only needs to be trained once to complete the reconstruction tasks of different missing rates or missing forms, and it exhibits superior interpolation effects compared to the existing deep learning methods. This denoising diffusion model-based approach retains the strong power of generative neural networks since the backbone can be inherited from state-of-the-art generative architectures. The main contributions of this paper are summarized below: * Our model's entire training framework is built on denoising diffusion probabilistic models (DDPM) <cit.>, which include two parameterized Markov chains, i.e., a forward diffusion process and a reverse process. The forward diffusion process progressively adds pre-designed Gaussian noise to the initial full seismic data. The reverse process uses variational inference to estimate the noise after a finite time of the forward process under the fixed noise addition mode, and thereby the parameterization estimation of the neural network is completed. * Our noise matching network follows the U-Net structure equipped with multi-head self-attention (MHSA), which can capture stronger long-range correlations of seismic data. * The inference process of our model deriving from condition interpolation is accelerated by using denoising diffusion implicit models (DDIM) <cit.>, and we adopt the strategy of resample iterations <cit.> to enhance the consistency of the interpolation content before and after the reverse diffusion step. To make more effective adjustments conditioned on the known seismic traces, we introduce a cosine noise schedule that enables the inverse process to generate meaningful reconstruction signals in the early stages instead of high-noise results under a linear noise schedule. This contributes greatly to the interpolation quality. * Existing deep learning methods are often limited by the missing forms constructed during training, consequently lacking robustness to effectively interpolate seismic data in cases where the missing patterns do not match or complex missing forms coexist. Our proposed method breaks through this issue and brings greater flexibility to the application of deep learning interpolation methods in field scenarios. The remainder of this paper is organized as follows. In Section <ref>, we introduce our SeisDDIMR method including the training, inference, and network architecture. In Section <ref>, experiments with various missing interpolation are performed for both synthetic and field seismic data. The effectiveness of our method is demonstrated by comparing it with popular methods. Furthermore, to indicate the stronger advantages of our model in practical application scenarios, we conduct uncertainty quantification and model robustness validation. Section <ref> presents some ablation studies. Finally, we make conclusions and discussions in Section <ref>. § METHODOLOGY Let x∈ℛ^n_r × n_t as the original complete seismic data, with n_r and n_t as the number of traces and time samples. The degradation process of observed seismic data can be formally expressed as y=m⊙x, such that m[ i,: ]= J, i is valid 0, else where ⊙ represents the element-wise multiplication, J is the all-ones matrix, and 0 denotes the zero matrix. The notation m[ i,: ] indicates the missing mask of ith trace data. 
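As a small, self-contained illustration of this degradation model, the sketch below builds a random trace-missing mask m and forms y = m ⊙ x for a patch of shape n_r × n_t. The uniform random choice of missing traces, the array shapes, and the function name are assumptions made only for illustration; they are not the mask generator used in the experiments later in the paper.

import numpy as np

def degrade(x: np.ndarray, missing_rate: float = 0.3, seed: int = 0):
    # x: complete patch of shape (n_r, n_t); returns (y, m) with y = m * x.
    # m[i, :] is all ones for a kept trace and all zeros for a missing one.
    rng = np.random.default_rng(seed)
    n_r = x.shape[0]
    n_missing = int(round(missing_rate * n_r))
    missing_idx = rng.choice(n_r, size=n_missing, replace=False)
    m = np.ones_like(x)
    m[missing_idx, :] = 0.0
    return m * x, m

# Example: a 128 x 128 patch with roughly 30% of its traces zeroed out.
x = np.random.rand(128, 128).astype(np.float32)
y, m = degrade(x)
print(y.shape, int((m[:, 0] == 0).sum()))  # (128, 128) 38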
Seismic data interpolation aims to learn a function mapping observed seismic data y back to complete data, which is usually implemented by a neural network parameterized by θ. Different from a single neural network model, the diffusion model-based approach incorporates multiple parameterization processes to achieve stepwise approximation. The proposed SeisDDIMR model consists of two main processes, i.e., the training process for estimating the parameters of seismic DDPM and the inference process for interpolating missing seismic data. In Section <ref>, we introduce the key principles of DDPM combined with the background of seismic data interpolation. The following Sections, <ref> and <ref>, provide descriptions of the noise matching network and its corresponding noise schedule. Finally, the inference method, together with its theoretical background, is presented in Section <ref>. §.§ Seismic Denoising Diffusion Probabilistic Model Given the complete seismic data samples x_0 ∼ q(x_0), DDPM relies on the generative Markov chain process and the noise matching network to gradually learn the target distribution p_θ(x_0). The forward diffusion process is a deterministic Markov chain starting from the initial input x_0 and using a pre-specified noise schedule to gradually add Gaussian noise to perturb the data distribution. Given the latent variables x_1, …, x_T derived from the same sample space with x_0, the diffusion process is defined as q(x_1: T|x_0):=∏_t=1^T q(x_t |x_t-1), where q(x_t |x_t-1):=𝒩(x_t ; √(1-β_t)x_t-1, β_t 𝐈). Here, β_t∈(0,1) is a pre-designed increasing variance schedule of Gaussian noise. The closed form of sampling x_t given by Ho et al. <cit.> reveals the progressive changes during the middle time of the forward process. Letting α_t:=1-β_t and α̅_t:=∏_s=1^t α_s, it can be denoted as q(x_t |x_0)=𝒩(x_t ; √(α̅_t)x_0,(1-α̅_t) 𝐈). As t continues to increase, the final data distribution converges to a given prior distribution, i.e., a standard Gaussian for x_0. Correspondingly, the reverse process will gradually denoise for each step of the forward process starting from p(x_T)=𝒩(x_T ; 0, 𝐈) under the Markov chain transition p_θ(x_0: T):=p(x_T) ∏_t=1^T p_θ(x_t-1|x_t), where p_θ(x_t-1|x_t):=𝒩(x_t-1 ; μ_θ(x_t, t), Σ_θ(x_t, t)) and the network parameter θ is shared across different reverse stages. This optimization problem of fitting the data distribution q(x_0) can be converted into the minimization of a variational lower bound (VLB) for the negative log likelihood by introducing Jensen’s inequality L_vlb:=𝔼_q(x_0: T)[logq(x_1: T|x_0)/p_θ(x_0: T)] ≥-𝔼_q(x_0)log p_θ(x_0). VLB is decomposed into the following KL-divergence form between two Gaussian distributions by including the Markov property in the denoising diffusion model and the definition form of the forwards process L_vlb = 𝔼_q[D_KL(q(x_T |x_0) p(x_T))] -𝔼_q[log p_θ(x_0 |x_1)] +𝔼_q[∑_t=2^T D_KL(q(x_t-1|x_t, x_0) p_θ(x_t-1|x_t))]. According to Ho et al. <cit.>, the Gaussian distribution q(x_t-1|x_t, x_0) can be tractable as q(x_t-1|x_t, x_0)=𝒩(x_t-1 ; μ̃_t(x_t, x_0), β̃_t 𝐈), where μ̃_t(x_t, x_0):=√(α̅_t-1)β_t/1-α̅_tx_0+√(α_t)(1-α̅_t-1)/1-α̅_tx_t and β̃_t:=1-α̅_t-1/1-α̅_tβ_t. Furthermore, the first term in Eq. (<ref>) can be ignored as a constant. The discrete probability density of the second term can be estimated using continuous Gaussian distribution. Combined with the property Eq. (<ref>), D_KL(q(x_t-1|x_t, x_0) p_θ(x_t-1|x_t)) in the third term of Eq. 
(<ref>) is simplified to 𝔼[1/2 σ_t^21/√(α_t)(x_t(x_0, ϵ_t)-β_t/√(1-α̅_t)ϵ_t)-μ_θ(x_t(x_0, ϵ_t), t)^2], where the constant is omitted and ϵ_t ∼𝒩(0, 𝐈). Noting the availability of x_t, Ho et al. <cit.> transfer the predictions about μ_θ to ϵ_θ by the following parameterization μ_θ(x_t, t)=1/√(α_t)(x_t-β_t/√(1-α̅_t)ϵ_θ(x_t, t)). Regardless of the coefficients, since they find that removing them benefits sample quality, the popular loss used in DDPM is finally formulated as L_simple=𝔼_x_0 ∼ q(x_0), ϵ_t ∼𝒩(0, I)[ϵ_t-ϵ_θ(√(α̅_t)x_0+√(1-α̅_t)ϵ_t, t)^2]. Therefore, the network parameters are optimized by the mean squared error (MSE) loss between the Gaussian noise predicted by the network and the real noise for all time nodes of the reverse process except for t=1. Moreover, as discussed in <cit.>, the log-likelihood can be improved in the log domain by parameterizing the variance Σ_θ(x_t, t) = σ_t^2 𝐈 with the following interpolation between β_t and β̃_t Σ_θ(x_t, t)=exp(v logβ_t+(1-v) logβ̃_t), where v can be concatenated on another channel of ϵ_θ(x_t, t), serving as the output of the model. Finally, the loss function of our model is set to L_hybrid=L_simple+λ L_vlb, where we follow the setting in <cit.> and adopt λ = 0.001 to avoid L_vlb overwhelming L_simple. Once the training accomplished, sampling x_t-1 from p_θ(x_t-1|x_t) can be conducted with the following iterative update formula x_t-1=1/√(α_t)(x_t-1-α_t/√(1-α̅_t)ϵ_θ(x_t, t))+σ_t z, where z∼𝒩(0, 𝐈) (t>1) or z = 0 (t=1). Fig. <ref> illustrates the detailed stream of the seismic DDPM. The forward process does not require training and directly converts x_0 to the isotropic Gaussian noise. In the reverse process, the denoising model learns to predict the added noise for each time step. When gradually fitting the noise, the estimated value of x_0 can also be obtained at each time step according to x̂_0=√(1/α̅_t)x_t -√(1-α̅_t/α̅_t)ϵ_θ(x_t, t), even though it may not be satisfactory during mid-time stamps. §.§ Noise Matching Network The noise matching network used in <cit.> is based on the U-Net architecture with self-attention <cit.> and achieves impactful performance. Durall et al. <cit.> adopt this architecture to accomplish seismic data demultiple, denoising, and interpolation. Different from the aforementioned research works, we use a more appropriate network structure for seismic data generation, whose major stream inherits from the guided-diffusion model <cit.>. It adopts more architecture improvements to attain better generative quality. The overall architecture is displayed in Fig. <ref> using stacked residual blocks (Res Block) and attention blocks (Attn Block and MidAttn Block) for the encoder and decoder of U-Net. x_t is used as the network input for the denoising learning process to obtain predicted noise ϵ_θ(x_t, t), and the accompanying timestamp t is fed to each layer to embed time information by using the following Transformer sinusoidal time embedding (TE) <cit.> T E_(t, 2 i) =sin(t / 10000^2 i / d) T E_(t, 2 i+1) =cos(t / 10000^2 i / d), where d stands for the dimension of embedding vectors, t is the original time, and i is the dimension. Figuratively speaking, it serves for x_t to inform each layer about the current step of reverse diffusion. Fig. <ref> displays the detailed components of the Res Block, Attn Block, and MidAttn Block from left to right, where N=2 for the encoding process and N=3 for the decoding process. 
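For concreteness, the sinusoidal time embedding defined above can be sketched in a few lines; the embedding dimension and the tensor handling are illustrative assumptions rather than the exact module used in the noise matching network.

import torch

def timestep_embedding(t: torch.Tensor, dim: int = 128) -> torch.Tensor:
    # t: integer tensor of shape (B,). Returns (B, dim) with interleaved
    # sin/cos pairs: TE(t, 2i) = sin(t / 10000^(2i/d)), TE(t, 2i+1) = cos(...).
    half = dim // 2
    i = torch.arange(half, dtype=torch.float32, device=t.device)
    freqs = torch.exp(-torch.log(torch.tensor(10000.0)) * 2.0 * i / dim)
    args = t.float()[:, None] * freqs[None, :]            # (B, half)
    emb = torch.stack([torch.sin(args), torch.cos(args)], dim=-1)
    return emb.reshape(t.shape[0], dim)

# Embedding of a few diffusion timesteps, to be injected into each U-Net layer.
print(timestep_embedding(torch.tensor([0, 10, 999])).shape)  # torch.Size([3, 128])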
Upsampling and downsampling are executed after Res Block and Attn Block, except for the bottom layer, for a total of four operations. As illustrated in Fig. <ref>, the residual module is implemented with the inclusion of temporal information within. The MHSA module existing in Attn Block and MidAttn Block increases the receptive field of the model so that it can access all of the input seismic signals as introduced in <cit.>. Fig. <ref> makes a detailed illustration of the MHSA module, which receives the feature map as input and conducts three different linear operations W_q, W_k, and W_v to get the query matrix Q, key matrix K, and value matrix V. Each of them is divided into multiple heads, allowing the model to perform parallel computing and capture relevant information from different subspaces to integrate multiple attentions with different focuses. Self-attention is employed on the branches of each head to learn long-range correlations, which are formulated as Head_i=Attention(Q_i, K_i, V_i)=softmax(Q_i K_i^T/√(d_k)) V_i, where d_k is the dimension of queries and keys, and i stands for the number index of heads within {1, …, N_head}. We use N_head=4 in the noise matching network. Finally, MHSA is obtained by integrating the attention of each head together as MHSA(Q, K, V)=Concat(Head_1, …, Head_N_head). §.§ Cosine Noise Schedule DDPM <cit.> applies the linear noise schedule for β, where noise increases at a constant rate as the diffusion process proceeds. Since the primary concern in seismic data interpolation is the fidelity of the generated signal, as opposed to diversity, expediting the transition through the stage of high noise can facilitate the reconstruction of unknown areas. We adopt the following cosine schedule <cit.> α̅_t=f(t)/f(0), f(t)=cos(t / T+s/1+s·π/2)^2, where the offset s=0.008 is used to prevent β_t from being too small near t=0. The gray and blue dots in Fig. <ref> display the changing trend of α̅_t in the training process. Compared with the linear noise schedule, the cosine noise schedule can decelerate the global rate of information decay. Meanwhile, the gray dots in Fig. <ref> show the changing trend of β_t with respect to diffusion steps during the training process. The reduction of the strong noise states is observable, and it can aid in the interpolation of missing locations. To intuitively observe the differences between the generation processes of different noise schedules, Fig. <ref> illustrates the seismic data interpolation results x̂_0 at some middle timestamps during the reverse diffusion process. The interpolated content at intermediate timestamps under the linear noise schedule may deviate significantly from the ground truth distribution in Fig. <ref>. In contrast, the differences in distribution between each timestamp are much smaller under the cosine noise schedule, as shown in Fig. <ref>. This phenomenon occurs since the cosine noise schedule quickly passes through the high noise phase. Increased availability of known valid information facilitates the generation of missing regions, ensuring consistent alignment between the interpolated content and the ground truth. §.§ Implicit Conditional Interpolation with Resampling The trained seismic DDPM operates unconditionally, wherein the inverse diffusion process is generated directly from noise. However, for seismic data interpolation, it is essential to infer unknown signals from known regions. Hence, further refinement of the interpolation process is necessary. 
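Before turning to that conditional interpolation, the cosine noise schedule defined above can be made concrete with a short sketch that converts α̅_t = f(t)/f(0) into the per-step β_t. The clipping threshold is an assumed, commonly used numerical safeguard rather than a value taken from the paper.

import numpy as np

def cosine_beta_schedule(T: int = 1000, s: float = 0.008, max_beta: float = 0.999) -> np.ndarray:
    # alpha_bar(t) = f(t)/f(0), with f(t) = cos((t/T + s)/(1 + s) * pi/2)^2,
    # and beta_t = 1 - alpha_bar(t)/alpha_bar(t-1), clipped for numerical safety.
    t = np.arange(T + 1, dtype=np.float64)
    f = np.cos((t / T + s) / (1.0 + s) * np.pi / 2.0) ** 2
    alpha_bar = f / f[0]
    betas = 1.0 - alpha_bar[1:] / alpha_bar[:-1]
    return np.clip(betas, 0.0, max_beta)

betas = cosine_beta_schedule()
print(betas[0], betas[-1])  # noise grows slowly at first and saturates near max_beta at the end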
Inspired by the RePaint model <cit.>, we redesign the interpolation process to improve computation feasibility and interpolation quality. Different from the Seismic DDPM used in the training process, the inference process no longer satisfies the Markov assumption, and we adopt the DDIM sampling strategy to mitigate the computation burden existing in the RePaint model. Intuitively, it seems that the loss function of DDPM ultimately only depends on q(x_t|x_0) and the sampling process is only related to p(x_t-1|x_t), from which Song et al. <cit.> get inspiration for proposing denoising diffusion implict models (DDIM). They introduce the following non-Markovian inference q_σ(x_1: T|x_0):=q_σ(x_T |x_0) ∏_t=2^T q_σ(x_t-1|x_t, x_0), with a real vector σ = (σ_1, …, σ_T ) ∈ℝ_≥ 0. They choose q_σ(x_t-1|x_t, x_0) = 𝒩(√(α̅_t-1)x_0+√(1-α̅_t-1-σ_t^2)·x_t-√(α̅_t)x_0/√(1-α̅_t), σ_t^2 I) to ensure q_σ(x_t |x_0) remains consistent with the form in Eq. (<ref>). Under the above definition, the forward process q_σ(x_t |x_t-1, x_0) is still rebuilt as Gaussian and the VLB can then be written as L_VLB^σ:=𝔼_x_0: T∼ q_σ(x_0: T)[log q_σ(x_1: T|x_0)-log p_θ(x_0: T)] = 𝔼_x_0: T∼ q_σ(x_0: T)[log q_σ(x_T |x_0)+∑_t=2^T log q_σ(x_t-1|x_t, x_0)] - 𝔼_x_0: T∼ q_σ(x_0: T)[∑_t=1^T log p_θ^(t)(x_t-1|x_t)-log p_θ(x_T)]. Song et al. <cit.> have proved that the objective function, i.e., Eq. (<ref>), ultimately used by DDPM is a special case of L_VLB^σ under certain conditions, which allows us to directly use the pre-trained DDPM model as a solution for new objectives. With the aforementioned theoretical foundation, sampling from this non-Markovian generative process is focused on constructing σ to improve sample generation and reduce sample steps. Starting from Eq. (<ref>), the sampling operation can be formulated as x_t-1=√(α̅_t-1)(x_t-√(1-α̅_t)ϵ_θ(x_t, t)/√(α̅_t)) + √(1-α̅_t-1-σ_t^2)·ϵ_θ(x_t, t)+σ_tz, where the generative process becomes Markovian and equals DDPM if σ_t=√((1-α̅_t-1) /(1-α̅_t))√(1-α̅_t / α̅_t-1) for all t. Especially, it is reasonable to consider a sampling process of length less than T when q_σ(x_t |x_0) is fixed since the optimization result of DDPM essentially contains its optimization results for arbitrary subsequence parameters. Denoting the increasing time subsequence of the original time sequence [1, …, T] as τ=[τ_1, τ_2, …, τ_m] with of length m (the corresponding changes in α̅_τ_i and β_τ_i are shown in the red points of Figs. <ref> and <ref>, respectively), the σ_τ used in accelerated sampling process follows σ_τ_i(η)=η√((1-α̅_τ_i-1) /(1-α̅_τ_i))√(1-α̅_τ_i / α̅_τ_i-1), where η≥ 0. In particular, the generative process is defined as DDIM if η = 0 for all t since the variance σ keeps zero, so that the deterministic forward process becomes an implicit probabilistic model. Each step of the iterative reverse diffusion stage in the inference process uses the following implicit conditional interpolation formula x_τ_i-1=m⊙x_τ_i-1^valid+(1-m) ⊙x_τ_i-1^missing, where x_τ_i-1^valid is directly sampled from the forward diffusion process, i.e., Eq. (<ref>), which adds known information to the reverse process, and x_τ_i-1^missing is obtained by using the DDIM sampling formula Eq. (<ref>). As a result, x_τ_i-1 incorporates information from both known signals and model predicted signals before forwarding it to the next inverse diffusion step. The recovery of missing seismic data is designed as a implicit conditional interpolation process based on valid seismic data. 
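A single reverse step of this implicit conditional interpolation can be sketched as follows: the known traces are diffused forward to the target noise level via q(x_t|x_0), the missing traces are produced by the deterministic DDIM update (η = 0), and the two parts are merged with the trace mask. The function and variable names (e.g., eps_model, alpha_bar) and the tensor shapes are placeholders assumed for illustration, not the authors' code.

import torch

@torch.no_grad()
def conditional_ddim_step(x_t, y, m, t_cur, t_prev, alpha_bar, eps_model):
    # x_t: current sample (B, 1, n_r, n_t); y, m: observed data and trace mask
    # (1 = known, 0 = missing); alpha_bar: 1-D tensor of cumulative alphas indexed by t.
    a_t, a_prev = alpha_bar[t_cur], alpha_bar[t_prev]
    t_batch = torch.full((x_t.shape[0],), t_cur, device=x_t.device, dtype=torch.long)
    eps = eps_model(x_t, t_batch)
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()             # estimate of x_0
    x_missing = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps   # DDIM update, eta = 0
    # Known part: diffuse the observation y forward to the same noise level t_prev.
    x_valid = a_prev.sqrt() * y + (1 - a_prev).sqrt() * torch.randn_like(y)
    return m * x_valid + (1 - m) * x_missing

# Usage (shapes only): x_next = conditional_ddim_step(x_t, y, m, 100, 90, alpha_bar, eps_model)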
Merely relying on the known signal as the condition is not adequate. Despite the relationship between the interpolated and known signals, maintaining interpolated signal continuity and consistency with known signals remains challenging. We introduce the resampling strategy <cit.> to enhance the consistency of sampling in the reverse process. After sampling x_τ_i-1 in the inverse diffusion process, the forward diffusion sampling is performed again to generate x_τ_i, with the difference being that x_τ_i now contains the information from x_τ_i-1^missing, thereby promoting consistency with known signals. Naturally, this kind of resampling operation cannot be performed only once. We define the jump length, denoted as L, to set how many times to backtrack for each resampling process, and we define the jump height, denoted as H, which determines the interval between time steps before and after two different resampling processes. In a word, our SeisDDIMR model comprises two key processes, i.e., the seismic DDPM training process and the implicit conditional interpolation process with resampling. Algorithm <ref> and Algorithm <ref> list the overview of our training and inference procedure, respectively. § EXPERIMENTS §.§ Evaluation Metrics We choose three metrics, i.e., MSE, signal-to-noise ratio (SNR), and peak signal-to-noise ratio (PSNR), to compare the fidelity of the interpolated seismic data. MSE between the interpolated seismic data {x̂^j}_j=1^n and the ground truth {x^j_gt}_j=1^n is calculated using MSE=1/n∑_j=1^n(x̂^j-x^j_gt)^2, where its value closer to 0 implies a higher fidelity of the interpolation result. The SNR for a single interpolated sample is defined as SNR=10log_10x_gt^2_F/x_gt-x̂^2_F, where ·_F represents the Frobenius norm. PSNR is calculated by the following formula as PSNR=10log_10MAX_x_gt^2/MSE, where MAX_x_gt refers to the highest value of x_gt. Obviously, larger SNR and PSNR both symbolize higher interpolation fidelity. The quality of the texture of the interpolation is evaluated using structural similarity (SSIM) <cit.>, which is widely used in the field of image generation following the formula SSIM(x_gt, x̂) =L(x_gt, x̂) · C(x_gt, x̂) · S(x_gt, x̂). Separately, L(·), C(·), and S(·) indicate similarities in luminance, contrast, and structure, and they are each defined as L(x_gt, x̂)=2 μ_x_gtμ_x̂+c_1/μ_x_gt^2+μ_x̂^2+c_1, C(x_gt, x̂)=2 σ_x_gtσ_x̂+c_2/σ_x_gt^2+σ_x̂^2+c_2, S(x_gt, x̂)=σ_x_gtx̂+c_3/σ_x_gtσ_x̂+c_3, where μ_x_gt(μ_x̂), σ_x_gt(σ_x̂), and σ_x_gtx̂ denote the mean value and standard deviation, and covariance, respectively. Constants c_1, c_2, and c_3 are typically set close to zero to prevent numerical instability. Thus, a higher SSIM implies a more similar texture. §.§ Data Set We validate our method over one open synthetic dataset provided by the Society of Exploration Geophysicists (SEG) C3 and one field dataset Mobil Avo Viking Graben Line 12 (MAVO). The SEG C3 dataset consists of 45 shots, each with a 201×201 receiver grid, 625 time samples per trace, and a sampling interval of 8 ms. We randomly extract 35,000 128×128 patches, out of which 25,000 patches are utilized for training, 5,000 for validation, and another 5,000 for testing. MAVO dataset comprises a 1001×120 receiver grid with 1500 time samples per trace. It is collected at a time rate of 4 ms and a spatial rate of 25 m. We randomly extract 10,000 256×112 patches, with 6,000 used for training, 2,000 used for validation, and 2,000 used for testing. 
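For completeness, the fidelity metrics defined in the evaluation subsection above can be computed with a few lines of NumPy. This is a generic sketch (SSIM is omitted, since a library implementation such as scikit-image's would normally be used for it):

import numpy as np

def mse(x_gt: np.ndarray, x_hat: np.ndarray) -> float:
    return float(np.mean((x_hat - x_gt) ** 2))

def snr(x_gt: np.ndarray, x_hat: np.ndarray) -> float:
    # SNR = 10 log10( ||x_gt||_F^2 / ||x_gt - x_hat||_F^2 )
    return float(10.0 * np.log10(np.sum(x_gt ** 2) / np.sum((x_gt - x_hat) ** 2)))

def psnr(x_gt: np.ndarray, x_hat: np.ndarray) -> float:
    # PSNR = 10 log10( MAX(x_gt)^2 / MSE )
    return float(10.0 * np.log10(x_gt.max() ** 2 / mse(x_gt, x_hat)))

# Toy check on a patch with small additive noise.
x_gt = np.random.rand(128, 128)
x_hat = x_gt + 0.01 * np.random.randn(128, 128)
print(round(snr(x_gt, x_hat), 2), round(psnr(x_gt, x_hat), 2))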
All seismic patches are first normalized to the interval [0,1] by applying min-max normalization. §.§ Implementation Details The diffusion step for the Seismic DDPM model is set to 1000. We train the seismic DDPM model on the training sets of SEG C3 and MAVO separately, as described in Algorithm <ref>, with N iterations of 600,000 and 300,000, respectively. The noise matching network is optimized by AdamW with a learning rate of 1e-4. The batch size is set to 30 for the SEG C3 dataset and 15 for the MAVO dataset. Our SeisDDIMR test is conducted by using Algorithm <ref>, where we adopt diffusion sampling step m=100, jump length L=10, and jump height H=1. We compare our experimental results with 5 currently popular methods, including DD-CGAN <cit.>, cWGAN-GP <cit.>, PConv-UNet <cit.>, ANet <cit.>, and Coarse-to-Fine <cit.>. All of the experiments are implemented using PyTorch 1.12.1 and an NVIDIA GeForce RTX 3090 GPU. §.§ Experimental Results We conduct Algorithm <ref> to accomplish our model testing. Interpolation reconstructions are performed on three missing categories of seismic data, and the experimental results are displayed below, followed by a comparison to other methods. It is worth noting that our SeisDDIMR model is trained only once on each dataset, whereas the other comparison methods are trained multiple times according to the various trace-missing forms, with their training parameters kept consistent with the respective original papers. §.§.§ Random Missing Traces For each patch in the test sets of SEG C3 and MAVO, we create random missing scenarios with missing rates ranging from 0.2 to 0.6. The initial values of the missing traces are set to 0. The experimental results of random missing interpolation are listed on the left side of Tab. <ref> and Tab. <ref>. Except for being slightly inferior in SSIM, our model shows better fidelity on the other three metrics. Fig. <ref> shows the interpolated traces of the random missing MAVO test data. It can be seen that our method achieves the best performance on both amplitudes and phases. As a special case of random missing seismic data, the regular missing scenario causes a serious aliasing problem, which usually appears as excessive artifacts in the high-frequency band of the f-k spectra caused by erroneous estimation or interpolation of the missing data frequency. Fig. <ref> compares the f-k spectra of SEG C3 test data with 70% regular missing traces. Severe aliasing can be noticed in Fig. <ref>. It is obvious that the f-k spectra of DD-CGAN, cWGAN-GP, and ANet are all accompanied by significant high-frequency artifacts. Comparisons between all methods indicate that our model yields the f-k spectra most consistent with the ground truth. §.§.§ Consecutive Missing Traces We randomly create consecutive missing masks, with missing rates ranging from 0.1 to 0.4 (not including edge traces), and apply them to the patches in the SEG C3 and MAVO datasets. The value of missing traces is initialized to 0. The interpolation results in the middle four columns of Tab. <ref> and Tab. <ref> indicate that our model consistently surpasses the other methods on these two datasets. We provide comparisons via color plots from the SEG C3 test dataset in Fig. <ref>. The ground truth data suffers from a consecutive missing rate of 40%, resulting in severely degraded data.
Significant differences in the distribution are visible in the known portions on either side, which hinders methods such as PConv-UNet and ANet that rely solely on feature similarity to perform the interpolation. The GAN-based DD-CGAN, cWGAN-GP, and Coarse-to-Fine methods are still limited in their interpolation ability and tend to smooth small-scale seismic events because of the large interval. Among these, cWGAN-GP demonstrates high continuity in strong-amplitude regions, but at the cost of fidelity for weak amplitudes. Coarse-to-Fine recovers fine details of the weak amplitudes but still exhibits significant differences from the ground truth data. Our model consistently improves the performance on both strong and weak amplitudes, and preserves the anisotropy and spatial continuity of the signals. §.§.§ Multiple Missing Traces For the SEG C3 and MAVO datasets, we construct multiple missing data scenarios containing both consecutive and random missing cases, with the total missing rate ranging over [0.2, 0.8]. The missing traces are also initialized with a value of 0. The corresponding quantitative comparison results are listed in the right four columns of Tab. <ref> and Tab. <ref>, where our model consistently outperforms the other methods on all four metrics. Fig. <ref> exhibits the interpolation results on a multiple missing example from the MAVO test data with a total missing rate of 54%. Our model produces artifact-free results, while the other methods generally produce wide areas of artifacts; DD-CGAN, cWGAN-GP, and PConv-UNet in particular fail to provide reliable recovery. In addition, the amplitudes predicted by our model are more accurate and consistent with the ground truth. Our model is capable of handling most cases of seismic missing-trace reconstruction. §.§ Model Robustness In order to study the impact of changes in the missing form on model capability, we evaluate the performance of different methods under unmatched training and testing mask patterns, as shown in Tab. <ref>. First, when testing on the unseen consecutive mask pattern, the performance of the models trained on the random mask type decreases significantly compared to the consecutive missing reconstruction results in Tab. <ref>. Second, although the models trained on the multiple mask form exhibit interpolation capability on different mask types, their results are still worse than those of models trained on the same mask pattern, as demonstrated in Tab. <ref>. Third, we can see that the consecutive missing model fails to interpolate random missing data, which is likely due to the significant differences in the learned patterns between the consecutive and random missing forms. It can be concluded that the effectiveness of generative models, whether based on GANs or on feature similarity, is sensitive to the mask form constructed in the training data. Performance is better when the missing form constructed for training is close to that of the test data, although such a match is hard to guarantee in field scenarios. In contrast, our model does not require rigorous construction of missing scenes during training and needs only one training run to complete interpolation of any missing form while maintaining its performance advantages.
§.§ Uncertainty Quantification Although various interpolation methods based on deep learning have accomplished promising results in the aforementioned publications, uncertainty quantification of the prediction is still absent subjecting to the fixed inference mode. However, providing measures of uncertainty for the predictions over or under confidence is important to improve the application security and avoid the cost of an error. The uncertainty in deep neural networks is divided into the reducible model uncertainty (also systemic or epistemic uncertainty) and irreducible data uncertainty (also statistical or aleatoric uncertainty) <cit.>. The model uncertainty is caused by inadequate models and unsuitable learning patterns, and data uncertainty is an inherent characteristic of data and cannot be reduced or eliminated by improving the subsequent model. There are multiple random sampling operations in our SeisDDIMR model as stated in Algorithm <ref>, thus we adopt the approach deriving from uncertainty ensemble methods to capture the total uncertainty by calculating the standard deviation of the interpolation results obtained after multiple repetitions of Algorithm <ref>. For a sample x, the uncertainty is computed as 1/n∑_i=1^n(x̂_i-μ̂_i)^2, where μ̂_i = 1/n∑_i=1^nx̂_i, x̂_i is the interpolation result of single test, and n is the repetition test number. Fig. <ref>-<ref> visualize the uncertainty in the interpolation results of random, consecutive, and multiple missing traces, respectively. The average interpolation results and average residual 1/n∑_i=1^n(x̂_i-x_gt) are also exhibited to provide an intuitive reference. It seems that unreliable reconstruction results are more likely to occur in the missing areas with patch edges and strong lateral amplitude variations, due to limited information and highly curved events. Besides, areas with high interpolation uncertainty also acquire large residuals. § ABLATION STUDY In this section, we will conduct a series of ablation studies on the key components and hyperparameters from three aspects including the MHSA module, seismic DDPM, and implicit interpolation with resampling strategy. §.§ MHSA Module We carry out our model training under different settings in the MHSA module with the total iteration number N = 300,000. The ablation study focuses on the location of MHSA in the network and the number of attention heads. Tab. <ref> lists the interpolation results on the validation set of SEG C3 data with multiple missing traces, where 32, 16, and 8 represent the resolution of the feature map in the noise matching network, respectively, meaning that the MHSA module is placed on the corresponding layer. We list the optimal configuration and its result on the top row. The following several rows show the results with one of the settings changed. It is evident that the best performance is achieved with the settings of attention head number N_head= 4 and attention location = 16, 8. §.§ Seismic DDPM The training of the Seismic DDPM is implemented by the process described in Algorithm <ref>. We selected three key components, i.e., diffusion steps T, loss function, and noise schedule, to validate the superiority of the adopted configuration. Seismic DDPM is trained on the SEG C3 dataset under different settings with the total iteration number N = 300,000, respectively. Tab. <ref> yields the interpolation results on the SEG C3 validation dataset with multiple missing traces. 
First, the number of diffusion steps T has a significant impact on the diffusion speed of our model. Increasing T refines the model, but also causes additional computational burden. Achieving a balance between computational efficiency and model performance requires a compromise configuration of the diffusion steps. Second, Tab. <ref> indicates that better interpolation results can be achieved by allowing the noise matching network to learn the noise variance σ_t under the hybrid loss L_hybrid. Finally, training seismic DDPM with different noise schedules indicates that using a linear schedule suffers from significant performance degradation. This finding supports our decision to adopt the cosine schedule, which has demonstrated better performance. §.§ Implicit Interpolation with Resampling Strategy To assess the efficacy of our proposed implicit interpolation and resampling strategy, we execute Algorithm <ref> under various configurations on the validation set of the MAVO dataset with multiple missing traces. The interpolation results are presented in Tab. <ref>. Comparing the interpolation performance of Algorithm <ref> based on DDPM and DDIM, it can be demonstrated that our proposed implicit interpolation significantly enhances the quality of signal recovery with an increase of 0.749 on SNR and PSNR. It is infeasible to explore all potential scenarios for diffusion sampling steps m, jump length L, and jump height H. Therefore, we aim to identify the most feasible options. To select the most suitable hyperparameters, we conduct algorithm <ref> repeatedly, applying various combinations. First, based on the trained DDPM, DDIM conducts m-step sampling. While increasing the number of sampling steps enhances the diffusion effect, it poses a higher computational burden during testing. The comparison of the performance of DDPM without the resampling strategy (last three rows in Tab. <ref>) reveals that a smaller value of m can be selected without significantly sacrificing performance. Consequently, we eventually adopt m = 100. Second, in regard to the values of L and H, it is easily found that an increase in their values results in an improved interpolation performance. However, this is accompanied by an increase in testing time. After considering both factors, L = 10 and H = 1 are ultimately chosen in our model. § CONCLUSION In this paper, we propose the SeisDDIMR method, which tackles the seismic data interpolation problem with a higher model robustness on various missing data scenarios. SeisDDIMR consists of two processes, including the training of seismic DDPM and implicit conditional interpolation with resampling. Seismic DDPM embeds seismic data into a denoising probability model framework. It achieves full-stage parameter sharing using the noise matching network based on the U-Net structure equipped with MHSA. The cosine noise schedule is introduced to speed up the transition during the high noise stage of seismic data. Implicit conditional interpolation with resampling, serving as the inference process of seismic DDPM, achieves flexible interpolation for different missing data scenarios and missing rates by utilizing the existing traces of the seismic data as a condition. Interpolation experiments on synthetic and field seismic data with multiple patterns of missing data demonstrate that our SeisDDIMR provides superior quality than existing methods and it also has advantages in robustness. Uncertainty quantification is provided to promote practical applications. 
In addition, a series of ablation experiments verify the rationality and effectiveness of hyperparameters and the design of key model components. In future studies, we will focus on extending our method to 3D or higher-dimensional seismic data interpolation. § ACKNOWLEDGMENT The authors would like to thank the Sandia National Laboratory and Mobil Oil Company for providing open data sets. IEEEtran [ < g r a p h i c s > ]Xiaoli Wei is currently pursuing the Ph.D. degree in statistics with the School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China. Her research interests include seismic data reconstruction, deep learning, and uncertainty estimation. [ < g r a p h i c s > ]Chunxia Zhang received her Ph.D degree in Applied Mathematics from Xi'an Jiaotong University, Xi'an, China, in 2010. Currently, she is a Professor in School of Mathematics and Statistics at Xi'an Jiaotong University. She has authored and coauthored about 30 journal papers on ensemble learning techniques, nonparametric regression, etc. Her main interests are in the area of ensemble learning, variable selection, and deep learning. [ < g r a p h i c s > ]Hongtao Wang is currently pursuing the Ph.D. degree in statistics with the School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China. His research interests include Bayesian statistics and deep learning. [ < g r a p h i c s > ]Chengli Tan received the B.S. degree in information and computing science and the M.S. degree in statistics from Xian Jiaotong University, Xian, China, in 2014 and 2017, where he is now pursuing the Ph.D. degree. His current research interests include adversarial learning, Bayesian nonparametrics, and stochastic optimization. [ < g r a p h i c s > ]Deng Xiong is a phD in Geophysics. He currently works for BGP, and serves as a Senior Engineer in R&D Center. He received his PhD from institute of Geology & Geophysics, Chinese Academy of Sciences in 2008. He is interested in near-surface velocity model building and seismic data reconstruction researches in recent years, and presently focuses on some industrial applications of artificial intelligence methods in seismic deblending and regularizations. [ < g r a p h i c s > ]Baisong Jiang is currently pursuing the Ph.D. degree in statistics with the School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, China. His research interests include seismic data reconstruction, deep learning, and image inpainting. [ < g r a p h i c s > ]Jiangshe Zhang received the M.S. and Ph.D. degrees in applied mathematics from Xi'an Jiaotong University, Xi'an, China, in 1987 and 1993, respectively, where he is currently a Professor with the Department of Statistics. He has authored and co-authored one monograph and over 80 conference and journal publications on optimization, and remote sensing image processing. His current research interests include Bayesian statistics, global optimization, ensemble learning, and deep learning. [ < g r a p h i c s > ]Sang-Woon Kim received the ME and the PhD degrees from Yonsei University, Seoul, Korea in 1980 and 1988, respectively, both in Electronic Engineering. In 1989, he joined the Department of Computer Engineering at Myongji University. Since 2019, he has continued his research as an Emeritus Professor there. His research interests include Statistical Pattern Recognition, Machine Learning. He is the author or coauthor of 51 regular papers and 13 books. He is a Life Senior Member of the IEEE and a member of the IEEK.
http://arxiv.org/abs/2307.05689v1
20230711180102
Magnetar emergence in a peculiar gamma-ray burst from a compact star merger
[ "H. Sun", "C. -W. Wang", "J. Yang", "B. -B. Zhang", "S. -L. Xiong", "Y. -H. I. Yin", "Y. Liu", "Y. Li", "W. -C. Xue", "Z. Yan", "C. Zhang", "W. -J. Tan", "H. -W. Pan", "J. -C. Liu", "H. -Q. Cheng", "Y. -Q. Zhang", "J. -W. Hu", "C. Zheng", "Z. -H. An", "C. Cai", "L. Hu", "C. Jin", "D. -Y. Li", "X. -Q. Li", "H. -Y. Liu", "M. Liu", "W. -X. Peng", "L. -M. Song", "S. -L. Sun", "X. -J. Sun", "X. -L. Wang", "X. -Y. Wen", "S. Xiao", "S. -X. Yi", "F. Zhang", "W. -D. Zhang", "X. -F. Zhang", "Y. -H. Zhang", "D. -H. Zhao", "S. -J. Zheng", "Z. -X. Ling", "S. -N. Zhang", "W. Yuan", "B. Zhang" ]
astro-ph.HE
[ "astro-ph.HE" ]
Magnetar emergence in a peculiar gamma-ray burst from a compact star merger H. Sun^1These authors contributed equally to this work, C.-W. Wang^2,3*, J. Yang^4,5*, B.-B. Zhang^4,5,6E-mail: [email protected], S.-L. Xiong^2E-mail: [email protected], Y.-H. I. Yin^4, Y. Liu^1, Y. Li^6, W.-C. Xue^2,3, Z. Yan^4, C. Zhang^1,3, W.-J. Tan^2,3, H.-W. Pan^1, J.-C. Liu^2,3, H.-Q. Cheng^1, Y.-Q. Zhang^2,3, J.-W. Hu^1, C. Zheng^2,3, Z.-H. An^2, C. Cai^7, L. Hu^6, C. Jin^1,3, D.-Y. Li^1, X.-Q. Li^2, H.-Y. Liu^1, M. Liu^1,3, W.-X. Peng^2, L.-M. Song^2,3, S.-L. Sun^8, X.-J. Sun^8, X.-L. Wang^2, X.-Y. Wen^2, S. Xiao^9, S.-X. Yi^2, F. Zhang^2, W.-D. Zhang^1, X.-F. Zhang^10, Y.-H. Zhang^10, D.-H. Zhao^1, S.-J. Zheng^2, Z.-X. Ling^1,3E-mail: [email protected], S.-N. Zhang^2,3, W. Yuan^1,3, B. Zhang^11,12E-mail: [email protected] ================================================================================ * National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China. * Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China. * University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100049, China. * School of Astronomy and Space Science, Nanjing University, Nanjing 210023, China. * Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210022, China. * Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, China. * College of Physics and Hebei Key Laboratory of Photophysics Research and Application, Hebei Normal University, Shijiazhuang, Hebei 050024, China. * Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, 200083, China. * Guizhou Provincial Key Laboratory of Radio Astronomy and Data Processing, Guizhou Normal University, Guiyang 550001, China. * Innovation Academy for Microsatellites, Chinese Academy of Sciences, Shanghai, 201304, China. * Nevada Center for Astrophysics, University of Nevada Las Vegas, NV 89154, USA. * Department of Physics and Astronomy, University of Nevada Las Vegas, NV 89154, USA. The central engine that powers gamma-ray bursts (GRBs), the most powerful explosions in the universe, is still not identified. Besides hyper-accreting black holes, rapidly spinning and highly magnetized neutron stars, known as millisecond magnetars, have been suggested to power both long and short GRBs<cit.>. The presence of a magnetar engine following compact star mergers is of particular interest as it would provide essential constraints on the poorly understood equation of state for neutron stars<cit.>.
Indirect indications of a magnetar engine in these merger sources have been observed in the form of plateau features present in the X-ray afterglow light curves of some short GRBs<cit.>. Additionally, some X-ray transients lacking gamma-ray bursts (GRB-less) have been identified as potential magnetar candidates originating from compact star mergers<cit.>. Nevertheless, smoking gun evidence is still lacking for a magnetar engine in short GRBs, and the associated theoretical challenges have been addressed<cit.>. Here we present a comprehensive analysis of the broad-band prompt emission data of a peculiar, very bright GRB 230307A. Despite its apparently long duration, the prompt emission and host galaxy properties point toward a compact star merger origin, being consistent with its association with a kilonova<cit.>. More intriguingly, an extended X-ray emission component emerges as the γ-ray emission dies out, signifying the emergence of a magnetar central engine. We also identify an achromatic temporal break in the high-energy band during the prompt emission phase, which was never observed in previous bursts and reveals a narrow jet with half opening angle of approximately 3.4^∘. At 15:44:06.650 UT on 7 March 2023 (denoted as T_0), Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM)<cit.> was triggered by the extremely bright GRB 230307A<cit.>, which was also reported by the Fermi Gamma-ray Burst Monitor (GBM)<cit.>. Utilizing the unsaturated GECAM data (Methods), we determined the burst's duration (T_90) to be 41.52 ± 0.03 s in the 10–1000 keV energy range (Table <ref>, see Fig. <ref>a for the energy-band-dependent light curves). The peak flux and total fluence in the same energy range were found to be 4.48_-0.12^+0.08× 10^-4erg cm^-2 s^-1 and (3.01 ± 0.01) × 10^-3erg cm^-2, respectively, making it the hitherto second brightest GRB observed, only dwarfed by the brightest-of-all-time GRB 221009A<cit.>. The pathfinder of the Einstein Probe mission<cit.> named Lobster Eye Imager for Astronomy (LEIA)<cit.>, with its large field of view of 340 deg^2, caught the prompt emission of this burst in the soft X-ray band (0.5–4 keV) exactly at its trigger time<cit.> (Methods), revealing a significantly longer duration of 199.6_-2.2^+5.1 s and a peak flux of 3.65_-0.27^+0.33× 10^-7erg cm^-2 s^-1 (Fig. <ref>a and Table <ref>). Subsequent follow-up observations<cit.> indicate that the burst is most likely associated with a nearby galaxy at a redshift of z=0.065. Despite its long duration, the association of a kilonova signature<cit.> implies that this burst originates from a binary compact star merger. The broad-band (0.5–6000 keV, Methods) prompt emission data we have collected from GECAM and LEIA also independently point toward a compact star merger origin. The burst's placement on various correlation diagrams is consistent with the so-called type I GRBs<cit.>, i.e., those with a compact star merger origin (Methods and Fig. <ref>a-d). First, its relatively small minimum variability timescale is more consistent with type I GRBs. Second, it deviates from the Amati relation of type II GRBs (massive star core collapse origin) but firmly falls into the 1σ scattering region of type I GRBs. Third, it is a significant outlier of the anti-correlation between the spectral lags and peak luminosities of type II GRBs but is mixed in with other type I GRBs. 
Besides, the optical host galaxy data<cit.> add further support: the location of the burst has a significant offset from the host galaxy, which is at odds with type II GRBs but fully consistent with type I GRBs. This makes GRB 230307A the second strong case of a long-duration type I GRB, after GRB 211211A<cit.> (see also Fig. <ref>). With the broad-band coverage jointly provided by GECAM-B/GECAM-C and LEIA throughout the prompt emission phase, one can perform a detailed temporal and spectral analysis of the GRB 230307A data (Methods and Fig. <ref>). The light curves in the energy range of GECAM-B and GECAM-C exhibit synchronized pulses with matching peak and dip features (Fig. <ref>a). The time-resolved spectrum in the 15–6000 keV range displays significant evolution (Fig. <ref>b and <ref>c), aligning with the “intensity tracking” pattern (i.e., the peak energy tracks the evolution of the intensity<cit.>). When the energy flux light curves are plotted in logarithmic-logarithmic space (Fig. <ref>a), a break is identified around 18–27 s post-trigger in all five GECAM bands (15–30 keV, 30–100 keV, 100–350 keV, 350–700 keV, and 700–2000 keV), after which the light curves decay with indices of 2.44_-0.02^+0.02, 2.80_-0.02^+0.02, 3.42_-0.02^+0.02, 4.14_-0.08^+0.08, and 4.59_-0.17^+0.19 in these bands, respectively (Methods). These measured slopes agree well with the theoretically predicted relation between the temporal decay slope and the spectral slope for emission produced solely by the high-latitude effect after a sudden cessation of the radiating shell (the so-called curvature effect<cit.>), suggesting that the prompt high-energy emission abruptly ceased, or significantly reduced its amplitude, at about 18–27 s post-trigger (Methods and Fig. <ref>c). There is an additional achromatic break around 84 s, after which the three light curves (15–30 keV, 30–100 keV, and 100–350 keV) decay with much steeper slopes (Methods and Fig. <ref>a,c). This is consistent with the edge effect of a narrow jet that powers the prompt γ-ray emission, with a half-opening angle of ∼ 3.4^∘ (R_GRB/10^15  cm)^-1/2, where R_GRB is the unknown radius of the GRB prompt emission from the central engine (Methods). This brings the collimation-corrected jet energy to ∼5.4× 10^49 erg, typical of type I GRBs<cit.>. In contrast to the hard X-rays and gamma-rays, the soft X-ray emission in the 0.5–4 keV LEIA band exhibits a different behavior. The emission persists for a much longer duration of >250 s in the form of a plateau followed by a decline. Its spectrum shows much less significant evolution within the first 100 s (Fig. <ref>d and Extended Data Table <ref>) compared to the high-energy GECAM spectrum. Notably, its spectral shape from the beginning up to ∼75 s deviates strongly from the low-energy extrapolations of the spectral energy distributions derived from the GECAM data (Methods and Fig. <ref>b). These deviations cannot easily be ascribed to a simple spectral break at low energies, as sometimes seen in GRBs<cit.>, but rather hint at a different radiation process dominating the LEIA band. Whereas the high-energy emission ceases suddenly (with the decay slope controlled by the curvature effect), the late decay slope in the LEIA band is shallower than the curvature-effect prediction, suggesting an intrinsic temporal evolution of the central engine (Fig. <ref>c).
These facts suggest that the LEIA-band soft X-ray emission arises from an emission component distinct from the γ-ray-emitting jet, and that this component is present already from the onset of the burst (Fig. <ref>a). A smoothly broken power law fit to the light curve gives decay slopes of 0.40_-0.06^+0.05 and 2.33_-0.15^+0.16 before and after the break time at 79.90_-5.78^+5.42 s (Methods and Fig. <ref>a). This pattern is generally consistent with the magnetic dipole spin-down law of a newborn, rapidly spinning magnetar. The best fit of the luminosity light curve with the magnetar model<cit.> yields a dipole magnetic field of 2.21^+0.54_-0.55× 10^16 G, an initial spin period of 3.49^+0.85_-0.89 ms, and a radiation efficiency of 6.11^+2.02_-2.02× 10^-3 (Methods and Fig. <ref>a). An X-ray plateau may also be interpreted within a black hole engine with a long-term accretion disk<cit.>. However, such a long-lived disk would eventually evaporate rapidly, so that the jet engine would cease abruptly. The fact that the final decay slope of the LEIA light curve is shallower than the curvature-effect prediction rules out this possibility and reinforces the magnetar interpretation. Indirect evidence of a magnetar engine in compact star mergers has been collected before in the form of internal plateaus in short GRBs<cit.> and in some short-GRB-less X-ray transients such as CDF-S XT2<cit.>. Fig. <ref>b shows the comparison of the X-ray luminosity light curves of GRB 230307A and other magnetar candidates. GRB 230307A is well consistent with the other sources but, thanks to the prompt detection by the wide-field X-ray camera of LEIA, displays the full light curve right from the trigger. It reveals the details of the emergence of the magnetar emission component and lends further support to the magnetar interpretation of the other events. In the near future, the synergy between GRB monitors and wide-field soft X-ray telescopes (such as the Einstein Probe) may detect more such cases and will generally provide more observational information to diagnose the physics of GRBs during the prompt emission stage. The identification of a magnetar engine in a merger event suggests that the neutron star equation of state is relatively stiff<cit.>. It also challenges modelers, who currently fail to generate a relativistic jet from newborn magnetars<cit.>. One possibility is that a highly magnetized jet is launched seconds after the birth of the magnetar, when the proto-neutron star cools down and the wind becomes clean enough<cit.>. In any case, the concrete progenitor of GRB 230307A remains an enigma. With a magnetar engine, the progenitor can only be a binary neutron star (NS) merger or a (near-Chandrasekhar-limit) white dwarf–NS merger<cit.>. For the former possibility, one must explain why this burst is particularly long. A “tip-of-iceberg” test<cit.> suggests that it is hard to turn this burst into a bright short GRB by arbitrarily raising the background flux or moving the source to a higher redshift (Methods and Table <ref>). For the latter scenario, the fact that the light curves and spectral evolution of GRBs 230307A and 211211A<cit.> do not fully resemble each other suggests that the mechanism must be able to produce diverse light curves. Unfortunately, GRB 230307A was detected prior to the fourth observing run (O4) of LIGO-Virgo-KAGRA. Future multi-messenger observations of similar events hold the promise of eventually unveiling the identity of the progenitors of these peculiar systems<cit.>.
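As a simple arithmetic cross-check of the jet energetics quoted above (E_γ,iso ≈ 3.08 × 10^52 erg and θ_j ≈ 3.4^∘), the collimation-corrected jet energy follows from E_jet = (1 − cos θ_j) E_γ,iso. The short Python sketch below is ours and not part of the original analysis code; it only reproduces the quoted number.

```python
import numpy as np

# Fiducial values quoted in the text (assumed here purely for illustration)
E_iso = 3.08e52               # isotropic-equivalent gamma-ray energy [erg]
theta_j = np.radians(3.4)     # jet half-opening angle for R_GRB = 1e15 cm

# Collimation-corrected jet energy
E_jet = (1.0 - np.cos(theta_j)) * E_iso
print(f"E_jet ~ {E_jet:.1e} erg")   # ~5.4e49 erg, as quoted above
```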
10 url<#>1 urlprefixURL [1]Usov1992NatauthorUsov, V. V.titleMillisecond pulsars with extremely strong magnetic fields as a cosmological source of -ray bursts. journalvolume357, pages472–474 (year1992). [2]Dai1998PhRauthorDai, Z. G.&authorLu, T.title-Ray Bursts and Afterglows from Rotating Strange Stars and Neutron Stars. journalvolume81, pages4301–4304 (year1998). [3]Zhang2001ApJauthorZhang, B.&authorMészáros, P.titleGamma-Ray Burst Afterglow with Continuous Energy Injection: Signature of a Highly Magnetized Millisecond Pulsar. journalvolume552, pagesL35–L38 (year2001). [4]Dai2006SciauthorDai, Z. G., authorWang, X. Y., authorWu, X. F.&authorZhang, B.titleX-ray Flares from Postmerger Millisecond Pulsars. journalSciencevolume311, pages1127–1129 (year2006). [5]Metzger2011MNRASauthorMetzger, B. D., authorGiannios, D., authorThompson, T. A., authorBucciantini, N.&authorQuataert, E.titleThe protomagnetar model for gamma-ray bursts. journalvolume413, pages2031–2056 (year2011). [6]Zhang2013ApJauthorZhang, B.titleEarly X-Ray and Optical Afterglow of Gravitational Wave Bursts from Mergers of Binary Neutron Stars. journalvolume763, pagesL22 (year2013). [7]Gao2016PRDauthorGao, H., authorZhang, B.&authorLü, H.-J.titleConstraints on binary neutron star merger product from short GRB observations. journalvolume93, pages044065 (year2016). [8]Margalit2019ApJauthorMargalit, B.&authorMetzger, B. D.titleThe Multi-messenger Matrix: The Future of Neutron Star Merger Constraints on the Nuclear Equation of State. journalvolume880, pagesL15 (year2019). [9]Rowlinson2013MNRASauthorRowlinson, A., authorO'Brien, P. T., authorMetzger, B. D., authorTanvir, N. R.&authorLevan, A. J.titleSignatures of magnetar central engines in short GRB light curves. journalvolume430, pages1061–1087 (year2013). [10]Lu2015ApJauthorLü, H.-J., authorZhang, B., authorLei, W.-H., authorLi, Y.&authorLasky, P. D.titleThe Millisecond Magnetar Central Engine in Short GRBs. journalvolume805, pages89 (year2015). [11]Xue2019NatauthorXue, Y. Q.et al.titleA magnetar-powered X-ray transient as the aftermath of a binary neutron-star merger. journalvolume568, pages198–201 (year2019). [12]Sun2019ApJauthorSun, H.et al.titleA Unified Binary Neutron Star Merger Magnetar Model for the Chandra X-Ray Transients CDF-S XT1 and XT2. journalvolume886, pages129 (year2019). [13]Ciolfi2020MNRASauthorCiolfi, R.titleCollimated outflows from long-lived binary neutron star merger remnants. journalvolume495, pagesL66–L70 (year2020). [14]Levan2023arXivauthorLevan, A.et al.titleJWST detection of heavy neutron capture elements in a compact object merger. journalarXiv e-printspagesarXiv:2307.02098 (year2023). [15]Li2021RDTMauthorLi, X. Q.et al.titleThe technology for detection of gamma-ray burst with GECAM satellite. journalRadiation Detection Technology and Methodsvolume6, pages12–25 (year2021). [16]ZhangDL2023arXivauthorZhang, D.et al.titleThe performance of SiPM-based gamma-ray detector (GRD) of GECAM-C. journalarXiv e-printspagesarXiv:2303.00537 (year2023). [17]Xiong2023GCNauthorXiong, S., authorWang, C., authorHuang, Y.&authorGecam Team. titleGRB 230307A: GECAM detection of an extremely bright burst. journalGRB Coordinates Networkvolume33406, pages1 (year2023). [18]Fermi2023GCN33405authorFermi GBM Team. titleGRB 230307A: Fermi GBM Final Real-time Localization. journalGRB Coordinates Networkvolume33405, pages1 (year2023). [19]An2023arXivauthorAn, Z.-H.et al.titleInsight-HXMT and GECAM-C observations of the brightest-of-all-time GRB 221009A. 
journalarXiv e-printspagesarXiv:2303.01203 (year2023). [20]Yuan2022authorYuan, W., authorZhang, C., authorChen, Y.&authorLing, Z.titleThe Einstein Probe Mission. In booktitleHandbook of X-ray and Gamma-ray Astrophysics, pages86 (year2022). [21]Zhang2022ApJLauthorZhang, C.et al.titleFirst Wide Field-of-view X-Ray Observations by a Lobster-eye Focusing Telescope in Orbit. journalvolume941, pagesL2 (year2022). [22]Ling2023arXivauthorLing, Z. X.et al.titleThe Lobster Eye Imager for Astronomy Onboard the SATech-01 Satellite. journalarXiv e-printspagesarXiv:2305.14895 (year2023). [23]Liu2023GCNauthorLiu, M. J.et al.titleGRB 230307A: soft X-ray detection with LEIA. journalGRB Coordinates Networkvolume33466, pages1 (year2023). [24]YHYang2023prepauthorYang, Y. H.journalin prep. (year2023). [25]Zhang2009ApJauthorZhang, B.et al.titleDiscerning the Physical Origins of Cosmological Gamma-ray Bursts Based on Multiple Observational Criteria: The Cases of z = 6.7 GRB 080913, z = 8.2 GRB 090423, and Some Short/Hard GRBs. journalvolume703, pages1696–1724 (year2009). [26]Yang2022NaturauthorYang, J.et al.titleA long-duration gamma-ray burst with a peculiar origin. journalvolume612, pages232–235 (year2022). [27]Golenetskii1983NatauthorGolenetskii, S. V., authorMazets, E. P., authorAptekar, R. L.&authorIlinskii, V. N.titleCorrelation between luminosity and temperature in -ray burst sources. journalvolume306, pages451–453 (year1983). [28]Kumar2000ApJauthorKumar, P.&authorPanaitescu, A.titleAfterglow Emission from Naked Gamma-Ray Bursts. journalvolume541, pagesL51–L54 (year2000). [29]wang2018authorWang, X.-G.et al.titleGamma-Ray Burst Jet Breaks Revisited. journalvolume859, pages160 (year2018). [30]Oganesyan2017ApJauthorOganesyan, G., authorNava, L., authorGhirlanda, G.&authorCelotti, A.titleDetection of Low-energy Breaks in Gamma-Ray Burst Prompt Emission Spectra. journalvolume846, pages137 (year2017). [31]Lu2023authorLu, W.&authorQuataert, E.titleLate-time accretion in neutron star mergers: Implications for short gamma-ray bursts and kilonovae. journalvolume522, pages5848–5861 (year2023). [32]Lu2014MNRASauthorLü, H.-J., authorZhang, B., authorLiang, E.-W., authorZhang, B.-B.&authorSakamoto, T.titleThe `amplitude' parameter of gamma-ray bursts and its implications for GRB classification. journalvolume442, pages1922–1929 (year2014). [33]Yin2023arXivauthorYin, Y.-H. I.et al.titleGRB 211211A-like Events and How Gravitational Waves May Tell Their Origin. journalarXiv e-printspagesarXiv:2304.06581 (year2023). [34]Li2016ApJSauthorLi, Y., authorZhang, B.&authorLü, H.-J.titleA Comparative Study of Long and Short GRBs. I. Overlapping Properties. journalvolume227, pages7 (year2016). [table]name= Table[figure]name= Fig. § METHODS §.§ Multi-mission observations of GRB 230307A GRB 230307A triggered in real-time the Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM) and the Fermi Gamma-ray Burst Monitor (GBM) almost simultaneously[GECAM trigger time is 2023-03-07T15:54:06.650 UTC, which is 21 ms earlier than that of Fermi/GBM.]. The extreme brightness of GRB 230307A was first reported by GECAM-B with its real-time alert data downlinked by the Beidou satellite navigation system<cit.>, which was subsequently confirmed by Fermi/GBM<cit.> and Konus-Wind<cit.>. Preliminary location of this burst measured by GECAM is consistent with that of GBM within error<cit.>. 
Refined localization in gamma-ray band was provided by the Inter-Planetary Network (IPN) triangulation<cit.> and later improved by follow-up observations of the X-ray Telescope (XRT) aboard the Neil Gehrels Swift Observatory<cit.>. The Lobster Eye Imager for Astronomy (LEIA) detected the prompt emission of this burst in the 0.5–4 keV soft X-ray band<cit.>. The burst was also followed up by the Gemini-South<cit.> and the James Webb Space Telescope (JWST) at several epochs<cit.>, unveiling a fading afterglow with possible signatures of kilonova in the optical and infrared bands and a candidate host galaxy at a redshift of z = 0.065<cit.> (Table <ref>). §.§ Data reduction LEIA. The Lobster Eye Imager for Astronomy<cit.>, a pathfinder of the Einstein Probe mission of the Chinese Academy of Sciences, is a wide-field (18.6^∘× 18.6^∘) X-ray focusing telescope built from novel technology of lobster eye micro-pore optics. The instrument operates in the 0.5–4 keV soft X-ray band, with an energy resolution of 130 eV (at 1.25 keV) and a time resolution of 50 ms. LEIA is onboard the Space Advanced Technology demonstration satellite (SATech-01), which was launched on 2022 July 27 and is operating in a Sun-synchronous orbit with an altitude of 500 km and an inclination of 97.4^∘. Since LEIA operates only in the Earth's shadow to eliminate the effects of the Sun, the observable time out of the radiation belt at high geo-latitude regions is ∼1000 s for each orbit. The observation of GRB 230307A was conducted from 15:42:32 UT (94 s earlier than the GECAM trigger time) to 16:00:50 UT on 7 March 2023 with a net exposure of 761 s. GRB 230307A was detected within the extended field of view (about 0.6^∘ outside the nominal field of view) of LEIA<cit.>(Extended data Fig. <ref>). The X-ray photon events were processed and calibrated using the data reduction software and calibration database (CALDB) designed for the Einstein Probe mission (Liu et al. in prep.). The CALDB is generated based on the results of on-ground and in-orbit calibration campaigns (Cheng et al. in prep.). The energy of each event was corrected using the bias and gain stored in CALDB. Bad/flaring pixels were also flagged. Single-, double-, triple-, and quadruple-events without anomalous flags were selected to form the cleaned event file. The image in the 0.5–4 keV range was extracted from the cleaned events (Extended Data Fig. <ref>a). The position of each photon was projected into celestial coordinates (J2000). The light curve and the spectrum of the source and background in a given time interval were extracted from the regions indicated in Extended Data Fig. <ref>a. Since the peak count rate is only 2 ct/frame over the extended source region, the pile-up effect is negligible in the LEIA data. GECAM. GECAM is a dedicated all-sky gamma-ray monitor constellation funded by the Chinese Academy of Sciences. The original GECAM mission<cit.> is composed of two microsatellites (GECAM-A and GECAM-B) launched in December 2020. GECAM-C<cit.> is the 3rd GECAM spacecraft, also onboard the SATech-01 satellite as LEIA. Each GECAM spacecraft has an all-sky field of view unblocked by the Earth, capable of triggering bursts in real-time<cit.> and distributing trigger alerts promptly with the Global Short Message Communication of Beidou satellite navigation system<cit.>. 
As the main instrument of GECAM, most gamma-ray detectors (GRDs) operate in two readout channels: high gain (HG) and low gain (LG), which are independent in terms of data processing, transmission, and dead-time<cit.>. Comprehensive ground and cross calibrations have been conducted on the GRDs of both GECAM-B and GECAM-C<cit.>. GRB 230307A was detected by GECAM-B and GECAM-C, while GECAM-A was offline. GECAM-B was triggered by this burst and automatically distributed a trigger alert to GCN Notice about 1 minute post-trigger[GECAM real-time alert for GRB 230307A: <https://gcn.gsfc.nasa.gov/other/160.gecam>]. GECAM-C also made the real-time trigger onboard, while the trigger alert was disabled due to the high latitude region setting<cit.>. With the automatic pipeline processing the GECAM-B real-time alert data (Huang et al. RAA, in press), we promptly noticed and reported that this burst features an extreme brightness<cit.>, which initiated the follow-up observations. Both GECAM-B and GECAM-C were working in inertial pointing mode during the course of GRB 230307A. Among all 25 GRDs of GECAM-B, GRD04 maintains a constant minimum zenith angle of 12.7^∘ throughout the duration of the burst. GRD01 and GRD05 also exhibit small zenith angles of 32.4^∘ and 32.7^∘, respectively. Thus these three detectors are selected for subsequent analysis. For GECAM-B GRDs, the HG channel operates from ∼ 40 to 350 keV, while the LG channel from ∼ 700 keV to 6 MeV. Among all 12 GRDs of GECAM-C, GRD01 exhibits the most optimal incident angle of 10.1^∘ throughout the burst and is selected in the subsequent analysis. For GECAM-C/GRD01, the HG channel operates from ∼ 6 to 350 keV. Since the detector response for 6–15 keV is affected by the electronics and is subject to further verification<cit.>, we only use >15 keV for spectral analysis in this work. The background estimation methodology employed for GECAM-C/GRD01 involves fitting a combination of first and second-order exponential polynomials to the adjacent background data, followed by interpolating the background model to the time intervals of the burst (Extended Data Fig. <ref>a). The efficacy of this background estimation for GECAM-C/GRD01 is verified by the comparison with GECAM-B data (Extended Data Fig. <ref>b). We note that GRB 230307A was so bright that the Fermi/GBM observation suffered from data saturation<cit.>. GECAM has dedicated designs to minimize data saturation for bright bursts<cit.>. For GECAM, the engineering count rate records the number of events processed onboard, while the event count rate records the number of events received on ground. In the case of data saturation, these two count rates would differ significantly. As shown in Extended Data Fig. <ref>c, the negligible discrepancy between these two count rates, due to the limited digital accuracy on the count numbering, confirms no count loss in event data and indicates the absence of saturation for both GECAM-B and GECAM-C. §.§ Temporal analysis Duration. The light curves are obtained through the process of photon counts binning (Fig. <ref>). The light curve for LEIA is obtained in 0.5–4 keV with a bin size of 1 s. The light curves for GECAM-B are generated using the bin size of 0.4 s in the energy ranges of 100–350 keV, 350–700 keV, and 700–2000 keV, by combining the data from three selected detectors (namely, GRD01, GRD04, and GRD05). 
The GECAM-C light curves are derived by binning the photon counts from GRD01 with a bin size of 0.4 s in the energy ranges of 6–15 keV, 15–30 keV, and 30–100 keV. The burst duration, denoted as T_ 90, is determined by calculating the time interval between the epochs when the total accumulated net photon counts reach the 5% and 95% levels. The durations obtained from the multi-wavelength light curve are annotated in Fig. <ref>. It is found that the duration of the burst significantly increases towards the lower energy range. We also calculate the duration within 10–1000 keV based on the data from GECAM-B and list it in Table <ref>. Amplitude parameter. The amplitude parameter<cit.> is a metric used to classify GRBs and is defined as f=F_ p/F_ b, denoting the ratio between the peak flux F_ p and the background flux F_ b at the same time epoch. A long-duration type II GRB may be disguised as a short-duration type I GRB due to the tip-of-iceberg effect. To distinguish intrinsically short-duration type I GRBs from ostensible short-duration type II GRBs, an effective amplitude parameter, f_ eff=ϵ f=F_ p^'/F_ b, can be defined for long-duration type II GRBs by quantifying the tip-of-iceberg effect, where F_ p^' is the peak flux of a pseudo GRB whose amplitude is lower by a factor ϵ from an original long-duration type II GRB so that its duration is just shorter than 2 s. Generally speaking, the f_ eff values of long-duration type II GRBs are systematically smaller than the f values of short-duration type I GRBs, thereby facilitating their distinction from one another. Utilizing the procedure presented in Ref.<cit.>, the effective amplitude of GRB 230307A is determined to be f_ eff=1.23±0.07 within the energy range of 10–1000 keV (Table <ref>). Such small f_ eff value aligns with the characteristics typically exhibited by long-duration GRBs. Variability. The minimum variability timescale (MVT) is defined as the shortest timescale of significant variation that exceeds statistical noise in the GRB temporal profile<cit.>. It serves as an indicator of both the central engine's activity characteristics and the geometric dimensions of the emitting region. The median values of the minimum variability timescale in the rest frame (i.e., MVT/(1+z)) for type I and type II GRBs are found to be 10 ms and 45 ms, respectively. To determine the MVT, we utilize the Bayesian block algorithm<cit.> on the entire light curve within the 10–1000 keV energy range to identify the shortest block that satisfies the criterion of encompassing the rising phase of a pulse. We find that the MVT of GRB 230307A is about 9.35 ms (Table <ref>), which is more consistent with type I GRBs rather than type II GRBs in the distribution of the MVTs<cit.> (Fig. <ref>a). We also utilize the continuous wavelet transform (CWT) method to derive MVT and obtain a consistent outcome in accordance with the Bayesian block algorithm. Spectral lag. Spectral lag refers to the time delay between the soft-band and hard-band background-subtracted light curves. It may be attributed to the curvature effect in the relativistic outflow. Upon reaching the observer, on-axis photons are boosted to higher energies, while off-axis photons receive a smaller boost and must travel a longer distance. Type II GRBs usually exhibit considerable spectral lags, while type I GRBs tend to have tiny lags<cit.>, indicating the difference in their emission region sizes. 
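For concreteness, the T_90 estimate introduced at the beginning of this subsection (the time interval between the epochs at which the accumulated net counts reach 5% and 95% of the total) can be sketched in a few lines of Python. The array names and binning below are placeholders rather than the actual GECAM pipeline, and the sketch assumes a bright burst for which the cumulative net-count curve is effectively monotonic.

```python
import numpy as np

def t90(t_edges, src_counts, bkg_counts):
    """Estimate T90 from a binned light curve.

    t_edges    : bin edges [s] relative to trigger, length N+1
    src_counts : total counts per bin in the source interval, length N
    bkg_counts : expected background counts per bin from the background model
    """
    net = src_counts - bkg_counts            # background-subtracted counts
    cum = np.cumsum(net) / np.sum(net)       # normalized cumulative net counts
    t_mid = 0.5 * (t_edges[:-1] + t_edges[1:])
    # assumes 'cum' is monotonically increasing (true for bright bursts)
    t05 = np.interp(0.05, cum, t_mid)        # epoch of 5% accumulated counts
    t95 = np.interp(0.95, cum, t_mid)        # epoch of 95% accumulated counts
    return t95 - t05, t05, t95

# Example call with placeholder arrays binned at 0.4 s (as used for GECAM here):
# t90_val, t5, t95 = t90(edges, counts_10_1000keV, bkg_model)
```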
Measurement of the spectral lag can be achieved by determining the time delay corresponding to the maximum value of the cross-correlation function<cit.>. Following the treatment in Ref.<cit.>, we use the background-subtracted light curves of GRB 230307A to measure the rest-frame lag between rest-frame energy bands 100–150 and 200–250 keV to be 1.6_-1.2^+1.4 ms (Table <ref>). We note that GRB 230307A manifests a very tiny spectral lag, an indicator that points towards being a type I GRB (Fig. <ref>c). §.§ Spectral analysis LEIA spectral fitting. We perform a detailed spectral analysis using the LEIA data. The energy channels in the range of 0.5–4 keV are utilized and re-binned to ensure that each energy bin contains at least ten counts. We employ three kinds of time segmentation approaches to extract LEIA spectra: * LEIA S-I: the time interval from 0 to 200 s is divided into three slices (0–50 s, 50–100 s, 100–200 s) to investigate the possible evolution of X-ray absorption; * LEIA S-II: 23 time slices with sufficient time resolution are obtained by accumulating 100 photon counts for each individual slice; * LEIA S-III: the time interval from 0 to 140 s is divided into ten distinct time slices. This partitioning is designed to align with the temporal divisions of the GECAM spectra, enabling subsequent comparisons and joint spectral fitting. For each of the above time slices, we generate the source spectrum and background spectrum, and the corresponding detector redistribution matrix and the ancillary response. First, the spectra of LEIA S-I are individually fitted with the XSPEC<cit.> model phabs*zphabs*zpowerlw, where the first and second components are responsible for the Galactic absorption and intrinsic absorption (N_ H), and the third one is a redshift-corrected power law function. We employ CSTAT<cit.> as the statistical metric to evaluate the likelihood of LEIA spectral fitting where both the source and background spectra are Poisson-distributed data. The column density of the Galactic absorption in the direction of the burst is fixed at 9.41 × 10^20  cm^-2<cit.> and the redshift is fixed at 0.065. It is found that all three spectra of S-I can be reproduced reasonably well by the absorption modified power law model (Extended Data Fig. <ref>b). The best-fit values and confidence contours of the column density and photon index are shown in Extended Data Fig. <ref>c. The fitted column densities are generally consistent within their uncertainties, indicating no significant variations of the absorption feature within 200 s. A time-averaged absorption of N_ H=2.73 × 10^21  cm^-2, yielded from the simultaneous fitting of all three S-I spectra, is thus adopted and fixed in all subsequent spectral analysis. We then proceed with the fitting of the LEIA S-II spectra. The obtained results for the photon index (Γ_ ph), normalization, and the corresponding fitting statistic are presented in Extended Data Table <ref>. By employing a redshift of z = 0.065, we further calculated the unabsorbed flux (Fig. <ref>a) and determined the luminosity for each S-II spectrum (Fig. <ref>a,b). Additionally, we cross-validated our findings by analyzing spectra with higher photon statistics, specifically 200 photons per time bin, and found that the alternate spectra yielded consistent results in our analysis. We also perform a spectral fit to the time-averaged spectrum during the whole observation interval of S-II, and the derived power-law index (α = -Γ_ ph) is shown in Table <ref>. 
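As an illustration of the LEIA fits described above, a schematic PyXspec version of the absorbed power-law fit might look as follows. Only the model string, the fixed column densities, the redshift, and the fit statistic are taken from the text; the spectral file names are placeholders, and the actual analysis uses the Einstein Probe calibration products rather than these generic files.

```python
from xspec import AllData, Model, Fit, Spectrum

# Load one LEIA time-sliced spectrum (placeholder file names)
AllData.clear()
spec = Spectrum("leia_slice.pha")
spec.background = "leia_bkg.pha"
spec.response = "leia.rmf"
spec.response.arf = "leia.arf"
spec.ignore("**-0.5 4.0-**")          # keep only the 0.5-4 keV band

# Galactic absorption * intrinsic absorption * redshifted power law
m = Model("phabs*zphabs*zpowerlw")
m.phabs.nH = 0.0941                   # 9.41e20 cm^-2, fixed Galactic column
m.phabs.nH.frozen = True
m.zphabs.nH = 0.273                   # 2.73e21 cm^-2, fixed intrinsic column
m.zphabs.nH.frozen = True
m.zphabs.Redshift = 0.065
m.zpowerlw.Redshift = 0.065

Fit.statMethod = "cstat"              # Poisson source and background spectra
Fit.perform()
print(m.zpowerlw.PhoIndex.values[0])  # best-fit photon index for this slice
```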
Finally, the LEIA S-III spectra are employed for SED analysis. GECAM spectral fitting. We conduct a thorough time-resolved and time-integrated spectral analysis using the data from the GRD04 and GRD01 of GECAM-B, as well as the GRD01 of GECAM-C. Each GRD detector has two independent readout channels, namely, high gain (HG) and low gain (LG). We utilize both the high and low gain data of the GECAM-B detectors with effective energy ranges of 40–350 and 700–6000 keV, respectively. Additionally, we only used the high gain data of the GECAM-C detector with an effective energy range of 15–100 keV but ignored the channels within 35–42 keV around the Iodine K-edge at 38.9 keV. We employ three distinct time segmentation methods over the time interval of 0–140 s: * GECAM S-I: the entire time interval is treated as a single time slice for time-integrated spectral analysis; * GECAM S-II: the time interval is divided into 99 time slices with sufficient spectral resolutions and approximately equal signal-to-noise levels; * GECAM S-III: the time interval is partitioned into ten time slices to match the time division of LEIA spectra for comparison and joint fitting. For each of the above time slices, we generate a source spectrum, a background spectrum, and a response matrix for each gain mode of each detector. Then we perform spectral fitting by utilizing the Python package, MySpecFit, in accordance with the methodology outlined in Refs.<cit.>. The MySpecFit package facilitates Bayesian parameter estimation by wrapping the widely-used Fortran nested sampling implementation Multinest<cit.>. PGSTAT<cit.> is utilized for GECAM spectral fitting, which is appropriate for Poisson data in the source spectrum with Gaussian background in the background spectrum. The cutoff power law (CPL) model is adopted to fit GECAM S-I and S-II spectra. The CPL model can be expressed as N(E)=A(E/100 keV)^α exp(-E/E_ c), where α is low-energy photon spectral index, and A is the normalization parameter in units of photons cm^-2 s^-1 keV^-1. The peak energy E_ p of ν f_ν spectrum is related to the cutoff energy E_ c through E_ p=(2+α)E_ c. Extended Data Table <ref> lists the spectral fitting results and corresponding fitting statistics for GECAM S-I and S-II spectra. It should be noted that we use cutoff energy as a substitute for peak energy when the 1σ lower limit of α falls below -2. Fig. <ref>b and <ref>c illustrate the significant “intensity tracking” spectral evolution in terms of E_ p and α, respectively. Spectral energy distribution. We perform spectral fitting on the spectra of LIEA S-III and GECAM S-III to examine the spectral energy distribution (SED) from soft X-rays to gamma-rays. To avoid the imbalance in fitting weights to lose spectral information due to significant differences in the fitting statistics between LEIA and GECAM, we first fit the spectra of LEIA and GECAM independently. For the LEIA S-III spectral fitting, we still employ the redshift-corrected power-law (PL) model with Galactic absorption (fixed to a hydrogen column density of 9.41× 10^20  cm^-2) and intrinsic absorption (fixed to a hydrogen column density of 2.73× 10^21  cm^-2). The PL model is defined as: N(E)=A(E/100 keV)^α, where α is low-energy photon spectral index, and A is the normalization parameter in units of photons cm^-2 s^-1 keV^-1. 
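For reference, the photon models quoted above can be written compactly in Python. This is a plain sketch of the PL and CPL functional forms and of the E_p–E_c conversion, not the fitting code itself; energies are in keV and the function names are ours.

```python
import numpy as np

def pl(E, A, alpha):
    """Power law: N(E) = A (E/100 keV)^alpha  [photons cm^-2 s^-1 keV^-1]."""
    return A * (E / 100.0) ** alpha

def cpl(E, A, alpha, Ec):
    """Cutoff power law: N(E) = A (E/100 keV)^alpha exp(-E/Ec)."""
    return A * (E / 100.0) ** alpha * np.exp(-E / Ec)

def ep_from_ec(alpha, Ec):
    """Peak energy of the nu-F-nu spectrum for the CPL: Ep = (2 + alpha) Ec."""
    return (2.0 + alpha) * Ec

# Example: for a given best-fit alpha, the time-integrated peak energy quoted
# in the text (~1255 keV) corresponds to Ec = Ep / (2 + alpha).
```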
For the GECAM S-III spectral fitting, we adopt the redshift-corrected CPL model, except for the three time slices of 13–18 s, 18–25 s, and 25–35 s, which require an additional cutoff and power law function below ∼50 keV in the model. Therefore, we employ the redshift-corrected BAND-Cut model<cit.> to fit these three time slices. The BAND-Cut model is defined as N(E)=AE^α_1 exp(-E/E_1) for E ≤ E_b, and N(E)=AE_b^α_1-α_2 exp(α_2-α_1) E^α_2 exp(-E/E_2) for E > E_b, where α_1 and α_2 represent the spectral indices of the two low-energy power law segments smoothly connected at the break energy E_b=(α_1-α_2)E_1E_2/(E_2-E_1), and A is the normalization parameter in units of photons cm^-2 s^-1 keV^-1. The peak energy of the ν f_ν spectrum is defined as E_p=E_2(2+α_2), which is located at the high-energy exponential cutoff. The spectral fitting results, along with the corresponding fitting statistics, are presented in Extended Data Table <ref>. The comparison between observed and model-predicted count spectra and the residuals (defined as (data − model)/data error) is depicted in Extended Data Fig. <ref>a. The SEDs derived from the spectral fittings at different time intervals are displayed in Fig. <ref>b. We notice that, in the early time intervals (before about 75 s), the PL spectra of LEIA (0.5–4 keV) and the CPL or BAND-Cut spectra of GECAM (15–6000 keV) do not align with the natural extrapolation of each other (Fig. <ref>b). This is mainly manifested by significant differences in the spectral index and amplitude of the LEIA and GECAM spectra. Such inconsistency suggests the presence of two distinct prompt emission components, each dominating the spectra of LEIA and GECAM, respectively. We also note that in the last two time slices, the spectra from both instruments can be seamlessly connected and adequately described by a single CPL model (Extended Data Fig. <ref>a and Fig. <ref>b), indicating that in the later stages (after approximately 75 s) a single component progressively dominates the spectra from both instruments. Another approach to demonstrate the SED involves combining the LEIA S-III and GECAM S-III spectra to perform a joint fitting. We initially adopt the redshift-corrected CPL model for the joint fitting, but for certain time intervals an additional spectral cutoff and power law function need to be introduced in the low-energy range of the CPL model to connect the LEIA and GECAM spectra, for which we consider the redshift-corrected BAND-Cut model. When both the CPL and BAND-Cut models can be constrained, the model comparison is performed based on the Bayesian information criterion<cit.> (defined as BIC=-2 lnℒ+k ln N, where ℒ is the maximum likelihood value, k is the number of the model's free parameters, and N is the number of data points), where the model with the smaller BIC value is preferred. Extended Data Table <ref> presents the fitting results of the preferred models and the corresponding statistics. The comparison between observed and model-predicted count spectra and the residuals is presented in Extended Data Fig. <ref>b. The SEDs derived from the joint spectral fittings at different time intervals are displayed in Extended Data Fig. <ref>c. Considering the possibility that the model overlooks some spectral features in the soft X-ray band due to the significantly lower contribution of the LEIA data to the fitting statistics, we overlay the model-predicted count spectra and SEDs obtained from the independent LEIA fittings onto Extended Data Fig. <ref>b and <ref>c, respectively.
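Because the BAND-Cut parameterization and the BIC comparison are central to these joint fits, a small illustrative implementation may help. This is our own sketch of the functional forms defined above, not the MySpecFit code; the function and variable names are ours.

```python
import numpy as np

def band_cut(E, A, a1, a2, E1, E2):
    """BAND-Cut photon model: two low-energy power laws (indices a1, a2) with
    exponential cutoffs, smoothly joined at Eb = (a1 - a2) E1 E2 / (E2 - E1)."""
    E = np.asarray(E, dtype=float)
    Eb = (a1 - a2) * E1 * E2 / (E2 - E1)
    low = A * E**a1 * np.exp(-E / E1)
    high = A * Eb**(a1 - a2) * np.exp(a2 - a1) * E**a2 * np.exp(-E / E2)
    return np.where(E <= Eb, low, high)

def bic(ln_like_max, n_free, n_data):
    """Bayesian information criterion: BIC = -2 ln L_max + k ln N."""
    return -2.0 * ln_like_max + n_free * np.log(n_data)

# Model selection as described above: compute the BIC for the CPL and the
# BAND-Cut fits of the same spectrum; the model with the smaller BIC is preferred.
```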
The comparison between the joint SEDs and the independent LEIA SEDs further confirms that, in the early stage (before about 75 s), even with the introduction of an additional break and power law at low energies, the model still can not account for the unique spectral features of LEIA, which once again points to the conclusion of two distinct components. In contrast, the consistency between the joint SEDs and LEIA independent SEDs in the late stage (after about 75 s) suggests a single dominant component. §.§ Classification Variability timescale versus duration. The temporal variability timescale of GRBs may conceal imprints of central engine activity and energy dissipation processes. On average, the minimum variability timescale of short-duration type I GRBs is significantly shorter than that of long-duration type II GRBs, providing a new clue to distinguish the nature of GRB progenitors and central engines<cit.>. We collected previous research samples, redrawn the MVT-T_ 90 diagram, and overlaid GRB 230307A and GRB 211211A on it (Fig. <ref>a). It is noteworthy that these two GRBs are outliers, as their MVTs are more consistent with that of type I GRBs despite being long-duration GRBs. Peak energy versus isotropic energy. The E_ p,z-E_γ,iso diagram serves as a unique classification scheme in the study of GRB energy characteristics, as different physical origins of GRBs typically follow distinct tracks<cit.>. We first replot the E_ p,z-E_γ,iso diagram (Fig. <ref>b) based on previous samples of type I and type II GRBs with known redshifts<cit.>. Here, E_ p,z=E_ p(1 + z) represents the rest-frame peak energy, while E_γ,iso denotes the isotropic energy. The relations between E_ p,z and E_γ,iso can be modeled using a linear relationship, logE_ p,z=b + k logE_γ,iso, for both GRB samples. The fitting process is implemented using the Python module emcee<cit.>, and the likelihood is determined using the orthogonal-distance-regression (ODR) method<cit.>. The ODR method is appropriate in this case as the data satisfy two criteria: (1) a Gaussian intrinsic scatter σ_ int along the perpendicular direction; (2) independent errors σ_x_i and σ_y_i on both the x and y axes. The log-likelihood function can be expressed as lnℒ=-1/2∑_i [ ln(2πσ_i^2)+Δ_i^2/σ_i^2], with the perpendicular distance Δ_i^2=(y_i-kx_i-b)^2/k^2+1, and the total perpendicular uncertainties σ_i^2=k^2σ_x_i^2+σ_y_i^2/k^2+1 + σ_ int^2, where the subscript i runs over all data points. The best-fitting parameters with 1σ uncertainties are k=0.36_-0.05^+0.04, b=-15.61_-2.14^+2.51 and logσ_ int=-1.29_-0.13^+0.13 for type I GRBs, and k=0.39_-0.02^+0.02, b=-17.82_-0.98^+0.93 and logσ_ int=-1.42_-0.05^+0.05 for type II GRBs. The best-fit correlations and corresponding 1σ intrinsic scattering regions are presented in Fig. <ref>b. The peak energy of GRB 230307A is constrained to be E_ p=1254.68_-17.99^+14.95 keV by the spectral fitting to GECAM S-I. Given a redshift of 0.065, the isotropic energy of GRB 230307A can be calculated as E_γ,iso=(3.08±0.01)×10^52 erg. We overplot both long-duration GRB 230307A and GRB 211211A on the E_ p,z-E_γ,iso diagram. As can be seen from Fig. <ref>b, GRB 211211A resides in an intermediate area between the tracks of type I and type II GRBs, while GRB 230307A is firmly located within the 1σ region of the type I GRB track. In addition, GRB 230307A poses higher total energy and harder spectra, likely indicating a more intense merger event. 
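Because the orthogonal-distance-regression likelihood written above is the core of the E_p,z–E_γ,iso fit, we illustrate it with a short emcee sketch. The prior bounds, initial guesses, and array names below are our own illustrative choices, not those of the original analysis; x and y stand for log E_γ,iso and log E_p,z with their 1σ uncertainties.

```python
import numpy as np
import emcee

def log_likelihood(theta, x, y, xerr, yerr):
    """ODR-style likelihood with perpendicular scatter, as defined above."""
    k, b, log_sig_int = theta
    delta2 = (y - k * x - b) ** 2 / (k ** 2 + 1.0)                      # perpendicular distance^2
    sig2 = (k ** 2 * xerr ** 2 + yerr ** 2) / (k ** 2 + 1.0) \
           + (10.0 ** log_sig_int) ** 2                                  # total perpendicular variance
    return -0.5 * np.sum(np.log(2.0 * np.pi * sig2) + delta2 / sig2)

def log_prior(theta):
    k, b, log_sig_int = theta
    # broad flat priors; the bounds are illustrative assumptions
    if -2.0 < k < 2.0 and -60.0 < b < 20.0 and -3.0 < log_sig_int < 1.0:
        return 0.0
    return -np.inf

def log_prob(theta, x, y, xerr, yerr):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, x, y, xerr, yerr)

# ndim, nwalkers = 3, 32
# p0 = np.array([0.4, -18.0, -1.4]) + 1e-3 * np.random.randn(nwalkers, ndim)
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(x, y, xerr, yerr))
# sampler.run_mcmc(p0, 5000, progress=True)
```

The same likelihood machinery carries over to the L_γ,iso–τ_z anti-correlation discussed below, with x and y replaced by the logarithmic rest-frame lag and peak luminosity.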
We also consider the scenario where the host galaxy has a redshift of 3.87<cit.> and notice that in this case, the isotropic energy of GRB 230307A is ∼ 10^56 erg, which is an order of magnitude larger than that of the brightest-of-all-time GRB 221009A<cit.>. Such extremely high energy is very unlikely consistent with the known sample of GRBs. Peak luminosity versus spectral lag. An anti-correlation exists between the spectral lag and peak luminosity in the sample of type II GRB with a positive spectral lag<cit.>. Such anti-correlation can serve as a physically ambiguous indicator, suggesting that GRBs with short spectral lags have higher peak luminosities. In general, type I GRBs deviate from the anti-correlation of type II GRBs, with type I GRBs tending to exhibit smaller spectral lags than type II GRBs at the same peak luminosity. The significant differences between the two in this regard make the peak luminosity versus spectral lag correlation a useful classification scheme. On the basis of the previous samples of type I and type II GRBs with known redshift<cit.>, we replot the L_γ,iso-τ_ z diagram, where L_γ,iso is isotropic peak luminosity and τ_ z=τ/(1+z) is the rest-frame spectral lag (Fig. <ref>c). The anti-correlation between L_γ,iso and τ_ z for type II GRBs can be fit with the linear model logL_γ,iso=b + k logτ_ z. The ODR method<cit.> gives the best-fitting parameters with 1σ uncertainties to be k=-0.94_-0.26^+0.07, b=54.24_-0.14^+0.49 and logσ_ int=-1.61_-0.17^+0.29. To maintain consistency with the calculation method of the sample, we estimate the rest-frame lag of GRB 230307A between rest-frame energy bands 100–150 and 200–250 keV to be 1.6_-1.2^+1.4 ms. Assuming the redshift to be 0.065, the isotropic peak luminosity of GRB 230307A can be calculated as L_γ,iso=4.89_-0.13^+0.09× 10^51 erg based on the spectral fits to GECAM S-II. Then we place both GRB 230307A and GRB 211211A on the L_γ,iso-τ_ z diagram. As can be seen from Fig. <ref>c, their locations are more consistent with that of type I GRBs despite being long-duration GRBs. We also note that, if the redshift is 3.87<cit.>, the extremely high peak luminosity of GRB 230307A would make it a significant outlier in the known sample of GRBs. §.§ Fit of multi-wavelength flux light curves On the basis of detailed spectral fitting to the time-resolved spectra of LEIA S-II and GECAM S-II, we calculate the energy flux for each time slice and construct the multi-wavelength flux light curves for six energy bands. These six energy bands, including 0.5–4, 15–30, 30–100, 100–350, 350–700, and 700–2000 keV, are set by referencing the effective energy ranges of the LEIA, GECAM-B, and GECAM-C. It is noteworthy that, during the last three time slices of GECAM S-II, only the high gain data from GECAM-C and GECAM-B exhibit effective spectra that are significantly above the background level, while the low gain data from GECAM-B are already close to the background level and do not provide effective spectral information, which leads to insufficient confidence in determining flux values in the energy bands above 350 keV. Therefore, based on the Bayesian posterior probability distribution generated by Multinest<cit.>, we provide the 3σ upper limits of flux for 350–700 and 700–2000 keV in the last three time intervals. The multi-wavelength flux light curves are represented in Fig. <ref>a. All these flux light curves display multi-segment broken power law features. 
To fit these features, we introduce multi-segment smoothly broken power law (SBPL) functions. In general, smoothly connected functions in a logarithmic-logarithmic scale can be expressed as F=(F_ l^-ω + F_ r^-ω)^-1/ω, where F_ l and F_ r are the functions located on the left and right sides respectively, and ω describes the smoothness. When both of F_ l and F_ r are power law functions, a two-segment SBPL function can be obtained as F_12=(F_1^-ω_1 + F_2^-ω_1)^-1/ω_1, where F_1=A(t/t_ b1)^-α_1, F_2=A(t/t_ b1)^-α_2. The power law slopes before and after the break time t_ b1 are α_1 and α_2, respectively, and A is the normalization coefficient at t_ b1. In the case where another break occurs after two-segment SBPL function, a three-segment SBPL function can be expressed as F_123=(F_12^-ω_2 + F_3^-ω_2)^-1/ω_2, where F_3=F_12(t_ b2)(t/t_ b2)^-α_3 describes the third power law function. By extension, we can further expand the three-segment SBPL to include a third break and a forth power law function, namely the four-segment SBPL function: F_1234=(F_123^-ω_3 + F_4^-ω_3)^-1/ω_3, where F_4=F_123(t_ b3)(t/t_ b3)^-α_4 describes the forth power law function. The fitting process is implemented by the Python module PyMultinest<cit.>, a Python interface to the widely-used Fortran nested sampling implementation Multinest<cit.>. We employ χ^2 as the statistical metric to evaluate the likelihood. It should be noted that a dip covering 17–20 s exists in the flux light curves of all energy bands of GECAM. Since the dip is an additional component superimposed on the multi-segment broken power law, the time interval containing the dip is omitted from the fitting procedure and will be examined in a separate analysis, which will be discussed in detail elsewhere. For convenience, we refer to the flux light curves of the six energy bands as LF (0.5–4 keV) and GFi (where i ranges from 1 to 5, representing the five energy bands of GECAM from low to high). We note that LF consists of a shallow power law decay followed by a steeper decline, which is significantly different from GFs, where they include an initial power law rise and several gradually steepening power law decay phases. GF1-3 require the four-segment SBPL functions to describe their features, while GF4 and GF5 can be described using three-segment SBPL functions as the late-time features cannot be constrained due to the last three data points being 3σ upper limits. Interestingly, in the GECAM energy bands, the first three power law segments and the corresponding two break times exhibit clear spectral evolution features. From low to high energy, the power law decay index gradually increases, while the break times gradually shift to earlier times. However, the final breaks in GF1-3 appear to be a simultaneous feature. Such an achromatic break is typically attributed to the geometric effects of the emission region. To test the simultaneity of the final break and determine its break time, we performed a joint fit to GF1-3. The fitting process can be described as follows. We simultaneously use three four-segment SBPL functions to fit GF1, GF2, and GF3 independently but allow the three t_ b3 parameters to degenerate into a common parameter. The statistic of the joint fit is the sum of χ^2 values for GF1-3. The fitting process is also implemented by the Python module PyMultinest<cit.>. The best-fitting parameter values and their 1σ uncertainties are presented in Extended Data Table <ref>. Extended Data Fig. 
<ref> exhibits the corresponding corner plot of posterior probability distributions of the parameters for the joint fit, where all the parameters are well constrained, and the common parameter t_ b3 is also well constrained to be 84.05_-2.19^+1.66 s. §.§ Curvature effect The “curvature effect” refers to the phenomenon of photons arriving progressively later at the observer from higher latitudes with respect to the line of sight<cit.>. It has been proposed that this effect plays an important role in shaping the decay phase of the light curve after the sudden cessation of the GRB's emitting shell<cit.>. By assuming a power law spectrum with spectral index β̂ for the GRB, the most straightforward relation of the curvature effect can be given as α̂=2+β̂<cit.>, where α̂ and β̂ are the temporal decay index and spectral index in the convention F_ν∝ t^-α̂ν^-β̂, respectively. If the aforementioned assumptions are released, and the intrinsically curved spectral shape and strong spectral evolution are taken into account, the above relationship is no longer applicable<cit.>. Nevertheless, for a narrow energy band, the intrinsically curved spectrum can be approximated, on average, by a power law, and the time-dependent α̂ and β̂ still approximately follow the relation α̂(t)=2+β̂(t)<cit.>. The multi-wavelength flux light curves of GRB 230307A exhibit an initial rise and subsequent power law decay phases that gradually become steeper. To test whether the decay phases are dominated by the curvature effect, we compare the time-dependent α̂(t) and 2+β̂(t) in each narrow energy band. Here, α̂(t) is obtained by numerically calculating -Δ logF/Δ logt of the best-fitting SBPL functions, while β̂(t) is the average spectral index -Δ logF_ν/Δ logE calculated in the corresponding narrow energy band based on the spectral fitting results for each time slice of LEIA S-II or GECAM S-II. Fig. <ref>c displays the time-dependent α̂(t) and 2+β̂(t) for each energy band. We note that the power law decay indices of the segments between the second and third breaks (see t_ b2 and t_ b3 in Extended Data Table <ref>) of GECAM multi-wavelength flux light curves are consistent with the prediction of curvature effect, implying that the jet's emitting shell stops shining at t_ b2 (∼ 20 s) and then high-latitude emission dominates the prompt emission. On the contrary, LEIA flux light curve is in a shallower decay phase, with a decay index much lower than the prediction of curvature effect. Such behavior suggests that the soft X-ray emission detected by LEIA is intrinsic to the central engine, not related to the narrow jet but consistent with the dipole radiation of the magnetar. §.§ Explanation of multi-wavelength flux light curves We conduct a smoothly broken power law fit to the multi-wavelength flux light curves (Fig. <ref>a) and explain each of the segments using our schematic model depicted in Extended Data Fig. <ref>. The jet and magnetar-powered emission processes are displayed separately. The emission of the jet-dominated GRB component undergoes a rise followed by a general trend of decline contributed by photons within the jet core of θ_c = 1/ Γ, where Γ is the Lorentz factor. After the emitting shell terminates, the emission is contributed by photons from higher latitude regions with respect to the line of sight of the observer. The best-fit temporal slopes during this phase (t_ b2–t_ b3 in Extended Date Table <ref>) are consistent with prediction of the high-latitude curvature effect relation α̂(t)=2+β̂(t) (Fig. <ref>c). 
Following the progressive decrease of the high-latitude emission, an achromatic break occurs, signaling the end of the curvature effect. This enables us to estimate the jet opening angle, for the first time, during the prompt emission phase of a GRB. Finally, the light curve drops with an even steeper slope, possibly contributed by some weak emission from regions beyond the opening angle of the jet that is likely to have no sharp edges. In contrast, the soft X-ray emission is powered by a presumably more isotropic magnetar wind. The light curve is dominated by the spin-down law with a plateau followed by a shallow decline. §.§ Estimation of jet opening angle The high-latitude emission effect is observed in the multi-wavelength flux light curves of GECAM, starting from the second break (t_ b2) and continuing until the third break (t_ b3), as depicted in Fig. <ref> (see also Extended Data Table <ref>). The third break signifies the moment when photons from the outermost layer of the shell, with a radius R_ GRB, reach the observer. The duration of such tail emission can be described by the following relationship<cit.>: Δ t_ b = t_ b3 - t_ b2 = (1+z) (R_ GRB/c) (θ_j^2/2), where θ_j represents the half opening angle of the jet. By substituting t_ b3 = 84.05 s, t_ b2 = 22.18 s, and assuming a typical radius of R_ GRB = 10^15 cm, we can calculate the opening angle of the jet as follows: θ_j = √(2cΔ t_ b/(1+z)R_ GRB)≈ 3.4^∘(Δ t_ b/62 s)^1/2(R_ GRB/10^15 cm)^-1/2. §.§ Host galaxy We conducted a search for the host galaxy of GRB 230307A in various public galaxy catalogs and GCN circulars. We estimated the chance coincidence probability P_ cc<cit.> for the most promising candidates, which are listed below. * Large Magellanic Cloud (LMC): The LMC is the nearest and brightest galaxy to the Milky Way, located at a distance of 49 kpc. GRB 230307A is 8.15 degrees away from the center of the LMC, corresponding to a physical separation of 7 kpc, and it is situated on the edge of the Magellanic Bridge. The surface density of galaxies, typically used for estimating chance coincidence probabilities, is not applicable to the LMC due to its brightness. Instead, we consider a surface density of galaxies as bright as the LMC, which is σ=1/41252.96 deg^-2. Using a half-light radius of r_50=2.2 deg<cit.>, we estimate the chance coincidence probability P_ cc to be 0.006. However, the energy of the GRB is relatively low, around ∼ 9 × 10^44 erg, which is inconsistent with any known transients. Additionally, it is unlikely for a transient with such low energy to produce gamma-ray photons. Therefore, we consider this scenario to be less likely. * Galaxy with a redshift of z=3.87: According to Ref.<cit.>, there is a faint galaxy located 0.2 arcsec away from GRB 230307A with a redshift of z=3.87. We analyzed the JWST/NIRCam images using the official STScI JWST Calibration Pipeline version 1.9.0 and found the galaxy to be fainter than 28.5 mag in the JWST/F070W bands and 27.4 mag in the JWST/F277W band. The estimated P_ cc using the JWST/F277W magnitude is 0.034, while for the JWST/F070W band, it is estimated to be greater than 0.09. Since the F070W band is closer to the r band, with which the galaxy brightness distribution is produced, the latter estimate is considered more reliable. However, the extremely high energy of the GRB and its inconsistency with known GRB transients in Fig. <ref>b and <ref>c make this scenario less favored. 
* Galaxy with a redshift of z=0.065<cit.>: With the half-light radius and r-band magnitude from the DESI Legacy Survey<cit.>[https://www.legacysurvey.org/dr10/catalogs/], the chance coincidence probability for this galaxy is estimated to be 0.11 (Table <ref>). Although not statistically compelling, the redshift is more consistent with the physical properties of the GRB. Therefore, we consider this galaxy to be the most likely host. The offset of GRB 230307A from the host galaxy is 29.4", corresponding to 36.6 kpc at a redshift of 0.065. As presented in Fig. <ref>d, this offset is consistent with those of type I GRBs and larger than those of type II GRBs. Moreover, we explore other possible host galaxies in the DESI Legacy Survey. For objects within 5 arcmin of GRB 230307A, we exclude stars with a detected parallax in Gaia and then calculate the chance coincidence probability with the half-light radius and r-band magnitude in the catalog. It turns out that the galaxy at z=0.065 has the lowest P_cc, while all others have P_cc of 0.2 or more.

§.§ Magnetar dipole radiation

We fit the LEIA (0.5–4 keV) light curve of GRB 230307A in the prompt emission phase with both the smoothly broken power law model (Eq. <ref>) and the magnetar dipole radiation model. A rigidly rotating millisecond magnetar loses its rotational energy through both magnetic dipole radiation and gravitational-wave quadrupole radiation<cit.>, with Ė=IΩΩ̇=-B_p^2R^6Ω^4/(6c^3)-32GI^2ϵ^2Ω^6/(5c^5), where Ė is the total spin-down rate, Ω=2π/P is the angular frequency and Ω̇ its time derivative, I is the moment of inertia (= 3.33 × 10^45 g cm^2), B_p is the dipolar field strength at the magnetic poles on the NS surface, R is the NS radius (= 1.2 × 10^6 cm), and ϵ is the ellipticity of the NS (= 10^-4). The electromagnetic emission is determined by the dipole spin-down luminosity L_sd, i.e., L_X(t)=η L_sd = η B_p^2 R^6 Ω^4(t)/(6c^3), where η is the efficiency of converting the dipole spin-down luminosity into the X-ray luminosity. The X-ray luminosity is derived from the observation assuming isotropic emission. In a more realistic situation, the X-ray emission may not be isotropic, and a beaming factor f_b (assumed to be 0.1) is introduced to account for the correction between the isotropic and the true luminosity L_X, i.e., L_iso(t) = L_X(t)/f_b = (η/f_b) B_p^2 R^6 Ω^4(t)/(6c^3). We model the unabsorbed luminosity light curves based on Eqs. <ref> and <ref>. The fitting is performed with the Markov Chain Monte Carlo code emcee<cit.>, and we employ χ^2 as the statistical metric to evaluate the likelihood. The prior bounds for the free parameters (log(B_p/G), P_0/ms, logη) are set to (15, 17), (1, 5), and (-3, -2), respectively. The first data point is excluded, as at that moment the light curve is still in the rising phase and has not yet reached the main plateau; theoretically, it is also predicted that it takes seconds for the proto-neutron star to cool down<cit.>. The fitting results are shown in Extended Data Fig. <ref>. Fast X-ray transients with light curves characteristic of spin-down magnetars have been identified previously in the afterglows of some short GRBs, as well as in a few events without an associated GRB, such as CDF-S XT2, which are thought to be of compact-star merger origin. We compare the X-ray luminosity light curve of GRB 230307A with the internal plateaus in the X-ray afterglows of short GRBs with known redshifts and in CDF-S XT2.
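To make the spin-down model described earlier in this subsection explicit, a minimal numerical sketch of the dipole-plus-gravitational-wave light curve is given below. The implementation and function names are ours; the fixed neutron-star parameters and the example best-fit values are those quoted in this work, in cgs units.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constants and fixed NS parameters quoted in the text (cgs units)
c, G = 2.998e10, 6.674e-8
I, R, eps, f_b = 3.33e45, 1.2e6, 1e-4, 0.1

def omega_dot(t, y, Bp):
    """Spin-down of a rigidly rotating magnetar: dipole + gravitational-wave terms."""
    om = y[0]
    dip = Bp**2 * R**6 * om**3 / (6.0 * c**3 * I)
    gw = 32.0 * G * I * eps**2 * om**5 / (5.0 * c**5)
    return [-(dip + gw)]

def l_iso(t, Bp, P0, eta):
    """Isotropic-equivalent luminosity L_iso(t) = (eta/f_b) Bp^2 R^6 Omega^4(t) / (6 c^3)."""
    om0 = 2.0 * np.pi / P0
    sol = solve_ivp(omega_dot, (0.0, t.max()), [om0], t_eval=t, args=(Bp,), rtol=1e-8)
    om = sol.y[0]
    return (eta / f_b) * Bp**2 * R**6 * om**4 / (6.0 * c**3)

# Example with the best-fit values quoted in the text
t = np.logspace(0, 3, 100)                          # seconds since trigger
L = l_iso(t, Bp=2.21e16, P0=3.49e-3, eta=6.11e-3)   # erg/s, plateau + decline
```

In the actual fit, this light-curve model would be wrapped in a χ² likelihood and sampled with emcee over (log B_p, P_0, log η), as described above.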
The afterglow data are retrieved from the XRT lightcurve repository<cit.> and corrected to the 0.5 - 4keV band assuming an absorbed power law spectrum. Such a correction is also made for the luminosity of CDF-S XT2, using power law indices Γ_1 = 1.45 and Γ_2 = 2.67 before and after the break time at 2.3 ks<cit.>. § DATA AVAILABILITY The processed data are presented in the tables and figures of the paper, which are available upon reasonable request. The authors point out that some data used in the paper are publicly available, whether through the UK Swift Science Data Centre website, JWST website, or GCN circulars. § CODE AVAILABILITY Upon reasonable requests, the code (mostly in Python) used to produce the results and figures will be provided. 10 url<#>1 urlprefixURL [35]Fermi2023GCN33407authorFermi GBM Team. titleGRB 230307A: Fermi GBM detection of a very bright GRB. journalGRB Coordinates Networkvolume33407, pages1 (year2023). [36]Fermi2023GCN33414authorFermi GBM Team. titleGRB 230307A: possibly the second highest GRB energy fluence ever identified. journalGRB Coordinates Networkvolume33414, pages1 (year2023). [37]Konus2023GCNauthorSvinkin, D.et al.titleKonus-Wind detection of GRB 230307A. journalGRB Coordinates Networkvolume33427, pages1 (year2023). [38]Kozyrev2023GCNauthorKozyrev, A. S.et al.titleFurther improved IPN localization for GRB 230307A. journalGRB Coordinates Networkvolume33461, pages1 (year2023). [39]Evans2023GCNauthorEvans, P. A.&authorSwift Team. titleGRB 230307A: Tiled Swift observations. journalGRB Coordinates Networkvolume33419, pages1 (year2023). [40]Burrows2023GCNauthorBurrows, D. N.et al.titleGRB 230307A: Swift-XRT afterglow detection. journalGRB Coordinates Networkvolume33465, pages1 (year2023). [41]O'Connor2023GCNauthorO'Connor, B.et al.titleGRB 230307A: Gemini-South Confirmation of the Optical Afterglow. journalGRB Coordinates Networkvolume33447, pages1 (year2023). [42]Gillanders2023GCNauthorGillanders, J., authorO'Connor, B., authorDichiara, S.&authorTroja, E.titleGRB 230307A: Continued Gemini-South observations confirm rapid optical fading. journalGRB Coordinates Networkvolume33485, pages1 (year2023). [43]Levan2023GCN33569authorLevan, A. J.et al.titleGRB 230307A: JWST observations consistent with the presence of a kilonova. journalGRB Coordinates Networkvolume33569, pages1 (year2023). [44]Levan2023GCN33580authorLevan, A. J.et al.titleGRB 230307A: JWST NIRSpec observations, possible higher redshift. journalGRB Coordinates Networkvolume33580, pages1 (year2023). [45]Levan2023GCN33747authorLevan, A. J.et al.titleGRB 230307A: JWST second-epoch observations. journalGRB Coordinates Networkvolume33747, pages1 (year2023). [46]Zhao2021arxivauthorZhao, X.-Y.et al.titleThe In-Flight Realtime Trigger and Localization Software of GECAM. journalarXiv e-printspagesarXiv:2112.05101 (year2021). [47]BDS2023IEEEauthorGuo, S.et al.titleIntegrated navigation and communication service for LEO satellites based on BDS-3 global short message communication. journalIEEE Accessvolume11, pages6623–6631 (year2023). [48]An2021RDTMauthorAn, Z.et al.titleThe design and performance of GRD onboard the GECAM satellite. journalRadiation Detection Technology and Methodsvolume6, pages43–52 (year2021). [49]Zheng2022NIMAauthorZheng, C.et al.titleElectron non-linear light yield of LaBr_3 detector aboard GECAM. journalNuclear Instruments and Methods in Physics Research Avolume1042, pages167427 (year2022). [50]Zheng2023arXivauthorZheng, C.et al.titleGround calibration of Gamma-Ray Detectors of GECAM-C. 
journalarXiv e-printspagesarXiv:2303.00687 (year2023). [51]ZhangYQ2023arXivauthorZhang, Y.-Q.et al.titleCross calibration of gamma-ray detectors (GRD) of GECAM-C. journalarXiv e-printspagesarXiv:2303.00698 (year2023). [52]Fermi2023GCN33551authorFermi GBM Team. titleGRB 230307A: Bad Time Intervals for Fermi GBM data. journalGRB Coordinates Networkvolume33551, pages1 (year2023). [53]liu2021arxivauthorLiu, Y. Q.et al.titleThe SiPM Array Data Acquisition Algorithm Applied to the GECAM Satellite Payload. journalarXiv e-printspagesarXiv:2112.04786 (year2021). [54]Golkhou2015ApJauthorGolkhou, V. Z., authorButler, N. R.&authorLittlejohns, O. M.titleThe Energy Dependence of GRB Minimum Variability Timescales. journalvolume811, pages93 (year2015). [55]Camisasca2023A AauthorCamisasca, A. E.et al.titleGRB minimum variability timescale with Insight-HXMT and Swift. Implications for progenitor models, dissipation physics, and GRB classifications. journalvolume671, pagesA112 (year2023). [56]Scargle2013ApJauthorScargle, J. D., authorNorris, J. P., authorJackson, B.&authorChiang, J.titleStudies in Astronomical Time Series Analysis. VI. Bayesian Block Representations. journalvolume764, pages167 (year2013). [57]Yi2006MNRASauthorYi, T., authorLiang, E., authorQin, Y.&authorLu, R.titleOn the spectral lags of the short gamma-ray bursts. journalvolume367, pages1751–1756 (year2006). [58]Bernardini2015MNRASauthorBernardini, M. G.et al.titleComparing the spectral lag of short and long gamma-ray bursts and its relation with the luminosity. journalvolume446, pages1129–1138 (year2015). [59]Norris2000ApJauthorNorris, J. P., authorMarani, G. F.&authorBonnell, J. T.titleConnection between Energy-dependent Lags and Peak Luminosity in Gamma-Ray Bursts. journalvolume534, pages248–257 (year2000). [60]Ukwatta2012MNRASauthorUkwatta, T. N.et al.titleThe lag-luminosity relation in the GRB source frame: an investigation with Swift BAT bursts. journalvolume419, pages614–623 (year2012). [61]Zhang2012ApJauthorZhang, B.-B.et al.titleUnusual Central Engine Activity in the Double Burst GRB 110709B. journalvolume748, pages132 (year2012). [62]Arnaud1996ASPCauthorArnaud, K. A.titleXSPEC: The First Ten Years. In editorJacoby, G. H.&editorBarnes, J. (eds.) booktitleAstronomical Data Analysis Software and Systems V, vol. volume101 of seriesAstronomical Society of the Pacific Conference Series, pages17 (year1996). [63]Cash1979ApJauthorCash, W.titleParameter estimation in astronomy through application of the likelihood ratio.journalvolume228, pages939–947 (year1979). [64]HI4PI2016A AauthorHI4PI Collaborationet al.titleHI4PI: A full-sky H I survey based on EBHIS and GASS. journalvolume594, pagesA116 (year2016). [65]Yang2023ApJLauthorYang, J.et al.titleSynchrotron Radiation Dominates the Extremely Bright GRB 221009A. journalvolume947, pagesL11 (year2023). [66]Feroz2008MNRASauthorFeroz, F.&authorHobson, M. P.titleMultimodal nested sampling: an efficient and robust alternative to Markov Chain Monte Carlo methods for astronomical data analyses. journalvolume384, pages449–463 (year2008). [67]Feroz2009MNRASauthorFeroz, F., authorHobson, M. P.&authorBridges, M.titleMULTINEST: an efficient and robust Bayesian inference tool for cosmology and particle physics. journalvolume398, pages1601–1614 (year2009). [68]Buchner2014A AauthorBuchner, J.et al.titleX-ray spectral modelling of the AGN obscuring region in the CDFS: Bayesian model selection and catalogue. journalvolume564, pagesA125 (year2014). [69]Feroz2019OJApauthorFeroz, F., authorHobson, M. 
P., authorCameron, E.&authorPettitt, A. N.titleImportance Nested Sampling and the MultiNest Algorithm. journalThe Open Journal of Astrophysicsvolume2, pages10 (year2019). [70]Zheng2012ApJauthorZheng, W.et al.titlePanchromatic Observations of the Textbook GRB 110205A: Constraining Physical Mechanisms of Prompt Emission and Afterglow. journalvolume751, pages90 (year2012). [71]Schwarz1978AnStaauthorSchwarz, G.titleEstimating the Dimension of a Model. journalAnnals of Statisticsvolume6, pages461–464 (year1978). [72]Amati2002A AauthorAmati, L.et al.titleIntrinsic spectra and energetics of BeppoSAX Gamma-Ray Bursts with known redshifts. journalvolume390, pages81–89 (year2002). [73]Minaev2020MNRASauthorMinaev, P. Y.&authorPozanenko, A. S.titleThe E_p,I-E_iso correlation: type I gamma-ray bursts and the new classification method. journalvolume492, pages1919–1936 (year2020). [74]Foreman2013PASPauthorForeman-Mackey, D., authorHogg, D. W., authorLang, D.&authorGoodman, J.titleemcee: The MCMC Hammer. journalvolume125, pages306 (year2013). [75]Lelli2019MNRASauthorLelli, F., authorMcGaugh, S. S., authorSchombert, J. M., authorDesmond, H.&authorKatz, H.titleThe baryonic Tully-Fisher relation for different velocity definitions and implications for galaxy angular momentum. journalvolume484, pages3267–3278 (year2019). [76]Ukwatta2010ApJauthorUkwatta, T. N.et al.titleSpectral Lags and the Lag-Luminosity Relation: An Investigation with Swift BAT Gamma-ray Bursts. journalvolume711, pages1073–1086 (year2010). [77]Xiao2022ApJauthorXiao, S.et al.titleA Robust Estimation of Lorentz Invariance Violation and Intrinsic Spectral Lag of Short Gamma-Ray Bursts. journalvolume924, pagesL29 (year2022). [78]Dermer2004ApJauthorDermer, C. D.titleCurvature Effects in Gamma-Ray Burst Colliding Shells. journalvolume614, pages284–292 (year2004). [79]Zhang2006ApJauthorZhang, B.et al.titlePhysical Processes Shaping Gamma-Ray Burst X-Ray Afterglow Light Curves: Theoretical Implications from the Swift X-Ray Telescope Observations. journalvolume642, pages354–370 (year2006). [80]Liang2006ApJauthorLiang, E. W.et al.titleTesting the Curvature Effect and Internal Origin of Gamma-Ray Burst Prompt Emissions and X-Ray Flares with Swift Data. journalvolume646, pages351–357 (year2006). [81]Uhm2015ApJauthorUhm, Z. L.&authorZhang, B.titleOn the Curvature Effect of a Relativistic Spherical Shell. journalvolume808, pages33 (year2015). [82]ZhangBB2007ApJauthorZhang, B.-B., authorLiang, E.-W.&authorZhang, B.titleA Comprehensive Analysis of Swift XRT Data. I. Apparent Spectral Evolution of Gamma-Ray Burst X-Ray Tails. journalvolume666, pages1002–1011 (year2007). [83]ZhangBB2009ApJauthorZhang, B.-B., authorZhang, B., authorLiang, E.-W.&authorWang, X.-Y.titleCurvature Effect of a Non-Power-Law Spectrum and Spectral Evolution of GRB X-Ray Tails. journalvolume690, pagesL10–L13 (year2009). [84]Bloom2002AJauthorBloom, J. S., authorKulkarni, S. R.&authorDjorgovski, S. G.titleThe Observed Offset Distribution of Gamma-Ray Bursts from Their Host Galaxies: A Robust Clue to the Nature of the Progenitors. journalvolume123, pages1111–1148 (year2002). [85]Corwin1994AJauthorCorwin, J., Harold G., authorButa, R. J.&authorde Vaucouleurs, G.titleCorrections and additions to the Third Reference Catalogue of Bright Galaxies.journalvolume108, pages2128–2144 (year1994). [86]GCN33485authorGillanders, J., authorO'Connor, B., authorDichiara, S.&authorTroja, E.titleGRB 230307A: Continued Gemini-South observations confirm rapid optical fading. 
journalGRB Coordinates Networkvolume33485, pages1 (year2023). [87]Dey2019AJauthorDey, A.et al.titleOverview of the DESI Legacy Imaging Surveys. journalvolume157, pages168 (year2019). [88]Shapiro1983authorShapiro, S. L.&authorTeukolsky, S. A.titleBlack holes, white dwarfs and neutron stars. The physics of compact objects (year1983). [89]Evans2007A AauthorEvans, P. A.et al.titleAn online repository of Swift/XRT light curves of -ray bursts. journalvolume469, pages379–385 (year2007). [90]Evans2009MNRASauthorEvans, P. A.et al.titleMethods and results of an automatic analysis of a complete sample of Swift-XRT observations of GRBs. journalvolume397, pages1177–1201 (year2009). Acknowledgments This work is supported by the National Key Research and Development Programs of China (2022YFF0711404, 2021YFA0718500, 2022SKA0130102, 2022SKA0130100). LEIA is a pathfinder of the Einstein Probe mission, which is supported by the Strategic Priority Program on Space Science of CAS (grant Nos. XDA15310000, XDA15052100). The GECAM (Huairou-1) mission is supported by the Strategic Priority Research Program on Space Science (Grant No. XDA15360000, XDA15360102, XDA15360300, XDA15052700) of CAS. We acknowledge the support by the National Natural Science Foundation of China (Grant Nos. 11833003, U2038105, 12121003, 12173055, 11922301, 12041306, 12103089, 12203071, 12103065, 12273042, 12173038, 12173056), the science research grants from the China Manned Space Project with NO.CMS-CSST-2021-B11, the Natural Science Foundation of Jiangsu Province (Grant No. BK20211000), International Partnership Program of Chinese Academy of Sciences for Grand Challenges (114332KYSB20210018), the Major Science and Technology Project of Qinghai Province (2019-ZJ-A10), the Youth Innovation Promotion Association of the Chinese Academy of Sciences, the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX23_0117), the Program for Innovative Talents, Entrepreneur in Jiangsu, and the International Partnership Program of Chinese Academy of Sciences (Grant No.113111KYSB20190020). We thank Y.-H. Yang, E. Troja, Y.-Z. Meng, Rahim Moradi, and Z.-G. Dai for helpful discussions. Author Contributions H.S., S.-L.X. and B.-B.Z. initiated the study. H.S., B.-B.Z., B.Z., J.Y. and S.-L.X. coordinated the scientific investigations of the event. C.-W.W., W.-C.X., J.Y., Y.-H.I.Y., W.-J.T., J.-C.L., Y.-Q.Z., C.Zheng, C.C., S.X., S.-L.X., S.-X.Y. and X.-L.W. processed and analysed the GECAM data. S.-L.X. first noticed the extremely brightness of GRB 230307A from GECAM data. H.S., Y.Liu., H.-W.P. and D.-Y.L. processed and analysed the LEIA data. Y.Liu. first identified GRB 230307A in LEIA data. J.Y., C.-W.W., Y.-H.I.Y., W.-C.X. and Y.-Q. Z. performed the spectral fitting of GECAM data. W.-C.X. and J.-C.L. performed background analysis for GECAM-C. C.Zheng and Y.-Q.Z. performed calibration analysis for GECAM. J.-C.L. performed the data saturation assessment for GECAM. H.S., Y.Liu. and J.Y. performed the spectral fitting of LEIA data. C.Z., Z.-X.L., Y.Liu., H.-Q.C. and D.-H.Z. contributed to the calibration of LEIA data. J.Y. and C.-W. W. contributed the Amati relation and luminosity-lag relation. Y.-H.I.Y., J.Y. and C.-W.W. performed the T_90 calculation. J.Y. calculated the amplitude parameter. J.Y., W.-C.X., W.-J.T., Y.-H.I.Y. and S.X. calculated the spectral lag. W.-J.T., W.-C.X. and S.X. performed the minimum variability timescale calculation. J.Y. and C.-W.W. fitted the multi-wavelength flux light curves. J.Y. 
calculated the curvature effect. J.Y., H.S., Y.Liu., C.-W.W. W.-C.X. and Y.-Q.Z. contributed to the SED modeling. J.Y. performed the global fitting to the achromatic break. C.-W.W. and Z.Y. performed the calculation of jet opening angle. H.S. and J.Y. performed the theoretical modelling with the magnetar dipole radiation model. J.-W.H. contributed to the luminosity correction of the X-ray afterglows. Y. Li, L.H. and J.Y. contributed to the information about the host galaxy. Z.-X.L., C.Z., X.-J.S., S.L.S., X.-F.Z., Y.-H.Z., and W.Y. contributed to the development of the LEIA instrument. Y.Liu., H.-Q.C., C.J., W.-D.Z., D.-Y.L., J.-W.H., H.-Y.L., H.S., H.-W.P. and M.L. contributed to the development of LEIA data analysis tools and LEIA operations. Z.-H.A., X.-Q.L., W.-X.P., L.-M.S., X.-Y.W., F.Z., S.-J.Z., C.C., S.X. and S.-L.X. contributed to the development and operation of GECAM. B.Z. led the theoretical investigation of the event. H.S., J.Y., B.-B.Z., B.Z., W.Y., S.-L.X., S.-N.Z., Y. Liu contributed to the interpretation of the observations and the writing of the manuscript with contributions from all authors. Competing Interests The authors declare that they have no competing financial interests. Additional information Correspondence and requests for materials should be addressed to B.-B.Z. ([email protected]), S.-L.X. ([email protected]), Z.-X.L. ([email protected]) and B.Z. ([email protected]). [table]name= Extended Data Table[figure]name= Extended Data Fig. * In cases where the 1σ lower limit of α falls below -2, we use cutoff energy as a substitute for peak energy. ccccccSpectral fitting results and corresponding fitting statistics for GECAM S-I and S-II spectra. All errors represent the 1σ uncertainties. t_1 t_2 α logE_ p logA PGSTAT/d.o.f (s) (s) (keV) (photons cm^-2 s^-1 keV^-1) 6rcontinued 0 140 -1.198_-0.002^+0.003 3.098_-0.006^+0.005 -0.697_-0.001^+0.001 2127.85/662 0.000 0.700 -0.93_-0.07^+0.05 2.13_-0.02^+0.02 -0.03_-0.04^+0.03 468.69/662 0.700 1.000 -0.43_-0.04^+0.05 2.72_-0.03^+0.03 0.07_-0.02^+0.02 448.21/662 1.000 1.400 -0.62_-0.04^+0.04 2.66_-0.02^+0.03 0.03_-0.02^+0.01 448.30/662 1.400 1.600 -0.39_-0.03^+0.03 2.99_-0.02^+0.02 0.13_-0.01^+0.01 561.77/662 1.600 1.800 -0.40_-0.04^+0.03 3.10_-0.02^+0.02 0.10_-0.01^+0.01 580.31/662 1.800 2.000 -0.26_-0.03^+0.03 3.10_-0.02^+0.02 0.11_-0.01^+0.01 647.78/662 2.000 2.300 -0.33_-0.03^+0.03 2.90_-0.02^+0.02 0.11_-0.01^+0.01 561.17/662 2.300 2.700 -0.43_-0.03^+0.03 2.86_-0.02^+0.02 0.08_-0.01^+0.01 603.89/662 2.700 3.200 -0.68_-0.03^+0.03 2.81_-0.03^+0.02 -0.02_-0.01^+0.01 612.16/662 3.200 3.400 -0.40_-0.02^+0.03 3.09_-0.02^+0.01 0.29_-0.01^+0.01 614.24/662 3.400 3.600 -0.42_-0.03^+0.03 3.05_-0.02^+0.02 0.16_-0.01^+0.01 592.26/662 3.600 4.000 -0.56_-0.02^+0.03 2.81_-0.02^+0.02 0.06_-0.01^+0.01 536.76/662 4.000 4.300 -0.62_-0.03^+0.03 3.02_-0.02^+0.02 0.00_-0.01^+0.01 588.84/662 4.300 4.700 -0.67_-0.03^+0.03 2.87_-0.02^+0.02 0.06_-0.01^+0.01 540.47/662 4.700 4.900 -0.56_-0.03^+0.03 2.97_-0.02^+0.02 0.22_-0.01^+0.01 559.74/662 4.900 5.035 -0.59_-0.04^+0.02 2.85_-0.02^+0.03 0.32_-0.01^+0.01 515.88/662 5.035 5.200 -0.70_-0.02^+0.02 3.03_-0.02^+0.02 0.30_-0.01^+0.01 568.22/662 5.200 5.400 -0.62_-0.03^+0.02 2.96_-0.02^+0.02 0.33_-0.01^+0.01 535.36/662 5.400 5.700 -0.73_-0.02^+0.02 2.98_-0.02^+0.02 0.19_-0.01^+0.01 546.81/662 5.700 5.830 -0.60_-0.03^+0.03 3.08_-0.02^+0.02 0.25_-0.01^+0.01 552.10/662 5.830 5.940 -0.45_-0.04^+0.04 3.10_-0.02^+0.02 0.30_-0.01^+0.01 598.12/662 5.940 6.050 -0.51_-0.03^+0.03 3.03_-0.02^+0.02 0.35_-0.01^+0.01 
543.25/662 6.050 6.200 -0.53_-0.03^+0.03 3.01_-0.02^+0.02 0.36_-0.01^+0.01 630.36/662 6.200 6.300 -0.49_-0.03^+0.03 3.06_-0.02^+0.02 0.43_-0.01^+0.01 612.34/662 6.300 6.415 -0.55_-0.03^+0.03 3.04_-0.02^+0.02 0.34_-0.01^+0.01 555.81/662 6.415 6.600 -0.62_-0.02^+0.03 3.11_-0.02^+0.02 0.29_-0.01^+0.01 696.62/662 6.600 6.700 -0.48_-0.04^+0.03 3.06_-0.02^+0.02 0.38_-0.01^+0.01 579.16/662 6.700 6.900 -0.58_-0.02^+0.02 2.97_-0.02^+0.02 0.34_-0.01^+0.01 583.81/662 6.900 7.200 -0.77_-0.03^+0.03 2.83_-0.02^+0.03 0.10_-0.01^+0.01 521.23/662 7.200 7.400 -0.69_-0.03^+0.03 2.89_-0.02^+0.03 0.22_-0.01^+0.01 583.81/662 7.400 7.700 -0.84_-0.04^+0.03 2.69_-0.03^+0.03 0.15_-0.02^+0.02 523.88/662 7.700 7.900 -0.85_-0.03^+0.03 2.81_-0.03^+0.03 0.22_-0.01^+0.01 571.07/662 7.900 8.100 -0.73_-0.02^+0.02 3.00_-0.02^+0.02 0.25_-0.01^+0.01 596.95/662 8.100 8.300 -0.70_-0.02^+0.03 3.07_-0.02^+0.02 0.27_-0.01^+0.01 581.17/662 8.300 8.445 -0.71_-0.03^+0.03 3.07_-0.03^+0.03 0.24_-0.01^+0.01 590.15/662 8.445 8.600 -0.71_-0.02^+0.03 3.14_-0.03^+0.02 0.27_-0.01^+0.01 596.36/662 8.600 8.900 -0.70_-0.02^+0.03 2.89_-0.02^+0.02 0.20_-0.01^+0.01 583.91/662 8.900 9.100 -0.73_-0.03^+0.03 2.90_-0.03^+0.03 0.14_-0.01^+0.01 509.65/662 9.100 9.315 -0.80_-0.04^+0.03 2.83_-0.03^+0.03 0.14_-0.01^+0.01 520.01/662 9.315 9.500 -0.68_-0.03^+0.03 2.82_-0.02^+0.02 0.29_-0.01^+0.01 561.81/662 9.500 9.695 -0.86_-0.03^+0.03 3.01_-0.03^+0.03 0.14_-0.01^+0.01 612.53/662 9.695 10.000 -0.83_-0.02^+0.02 3.05_-0.03^+0.03 0.05_-0.01^+0.01 534.35/662 10.000 10.300 -0.74_-0.02^+0.02 3.04_-0.02^+0.02 0.20_-0.01^+0.01 638.45/662 10.300 10.500 -0.74_-0.03^+0.03 2.91_-0.03^+0.03 0.15_-0.01^+0.01 550.87/662 10.500 10.700 -0.77_-0.02^+0.03 2.89_-0.03^+0.02 0.23_-0.01^+0.01 606.41/662 10.700 10.905 -0.77_-0.03^+0.04 2.83_-0.03^+0.03 0.15_-0.01^+0.01 481.37/662 10.905 11.200 -0.83_-0.04^+0.03 2.68_-0.03^+0.04 0.14_-0.02^+0.02 569.07/662 11.200 11.400 -0.81_-0.03^+0.03 2.80_-0.03^+0.03 0.21_-0.01^+0.01 535.59/662 11.400 11.700 -0.93_-0.03^+0.04 2.65_-0.04^+0.04 0.12_-0.02^+0.02 550.06/662 11.700 12.000 -1.01_-0.02^+0.02 2.88_-0.03^+0.03 0.11_-0.01^+0.01 611.71/662 12.000 12.245 -0.97_-0.03^+0.03 2.84_-0.04^+0.03 0.07_-0.01^+0.02 518.67/662 12.245 12.600 -0.98_-0.02^+0.02 2.88_-0.04^+0.03 0.08_-0.01^+0.01 581.00/662 12.600 12.900 -0.92_-0.03^+0.02 2.84_-0.03^+0.03 0.15_-0.01^+0.01 651.58/662 12.900 13.100 -0.91_-0.03^+0.03 2.72_-0.03^+0.03 0.32_-0.01^+0.01 561.25/662 13.100 13.300 -0.94_-0.03^+0.03 2.79_-0.04^+0.03 0.27_-0.01^+0.01 562.85/662 13.300 13.505 -1.05_-0.03^+0.04 2.80_-0.05^+0.04 0.15_-0.01^+0.02 529.89/662 13.505 14.000 -1.05_-0.03^+0.03 2.67_-0.04^+0.03 0.04_-0.01^+0.01 578.25/662 14.000 14.400 -1.08_-0.02^+0.03 2.85_-0.03^+0.03 0.08_-0.01^+0.01 660.04/662 14.400 14.900 -1.02_-0.04^+0.03 2.41_-0.02^+0.03 0.13_-0.02^+0.02 530.75/662 14.900 15.300 -1.07_-0.02^+0.04 2.66_-0.04^+0.03 0.12_-0.01^+0.02 497.61/662 15.300 15.900 -1.21_-0.02^+0.03 2.67_-0.04^+0.05 -0.01_-0.01^+0.01 594.82/662 15.900 16.600 -1.17_-0.02^+0.02 2.83_-0.03^+0.04 -0.10_-0.01^+0.01 534.37/662 16.600 17.400 -1.25_-0.03^+0.03 2.57_-0.04^+0.04 -0.14_-0.02^+0.01 482.42/662 17.400 18.600 -1.43_-0.04^+0.03 2.35_-0.04^+0.06 -0.50_-0.03^+0.02 498.41/662 18.600 19.300 -1.30_-0.04^+0.04 2.35_-0.04^+0.05 -0.24_-0.03^+0.02 522.11/662 19.300 19.900 -1.15_-0.04^+0.03 2.45_-0.03^+0.04 -0.03_-0.02^+0.02 543.22/662 19.900 20.400 -1.22_-0.03^+0.04 2.65_-0.05^+0.05 -0.05_-0.01^+0.02 584.43/662 20.400 21.100 -1.21_-0.03^+0.02 2.68_-0.03^+0.05 -0.19_-0.02^+0.01 533.13/662 21.100 21.500 -1.16_-0.03^+0.03 
2.69_-0.05^+0.05 -0.05_-0.02^+0.02 509.46/662 21.500 22.100 -1.20_-0.03^+0.03 2.55_-0.04^+0.05 -0.12_-0.02^+0.02 500.03/662 22.100 22.900 -1.25_-0.03^+0.04 2.47_-0.04^+0.04 -0.21_-0.02^+0.02 561.92/662 22.900 23.500 -1.08_-0.05^+0.05 2.25_-0.02^+0.03 -0.09_-0.03^+0.03 445.08/662 23.500 24.300 -1.30_-0.04^+0.03 2.38_-0.03^+0.05 -0.16_-0.02^+0.02 619.11/662 24.300 25.000 -1.30_-0.03^+0.03 2.56_-0.04^+0.06 -0.19_-0.02^+0.02 450.48/662 25.000 25.600 -1.19_-0.03^+0.05 2.25_-0.03^+0.02 -0.04_-0.02^+0.03 443.31/662 25.600 26.300 -1.38_-0.03^+0.05 2.20_-0.03^+0.03 -0.19_-0.02^+0.03 417.92/662 26.300 27.800 -1.66_-0.03^+0.03 2.19_-0.03^+0.05 -0.41_-0.02^+0.02 525.19/662 27.800 28.500 -1.50_-0.04^+0.03 2.49_-0.05^+0.08 -0.26_-0.02^+0.02 572.05/662 28.500 29.700 -1.56_-0.03^+0.03 2.51_-0.06^+0.07 -0.55_-0.02^+0.02 512.87/662 29.700 32.000 -1.66_-0.03^+0.03 2.40_-0.05^+0.08 -0.69_-0.02^+0.02 509.14/662 32.000 33.300 -1.78_-0.05^+0.05 2.03_-0.05^+0.07 -0.74_-0.03^+0.03 446.42/662 33.300 34.015 -1.73_-0.05^+0.04 2.24_-0.08^+0.10 -0.71_-0.03^+0.03 485.83/662 34.015 34.700 -1.63_-0.05^+0.05 2.21_-0.05^+0.09 -0.63_-0.04^+0.03 460.20/662 34.700 35.600 -1.63_-0.04^+0.04 2.23_-0.05^+0.07 -0.56_-0.03^+0.03 469.85/662 35.600 36.800 -1.56_-0.05^+0.05 2.05_-0.03^+0.05 -0.64_-0.04^+0.03 438.21/662 36.800 37.550 -1.63_-0.06^+0.05 2.07_-0.04^+0.06 -0.62_-0.04^+0.03 439.23/662 37.550 38.900 -1.62_-0.06^+0.04 2.19_-0.04^+0.09 -0.79_-0.04^+0.02 476.73/662 38.900 40.600 -1.81_-0.05^+0.04 2.17_-0.08^+0.15 -1.01_-0.03^+0.03 438.00/662 40.600 44.600 -1.91_-0.05^+0.03 1.76_-0.13^+0.07 -1.10_-0.04^+0.02 466.40/662 44.600 47.700 -1.84_-0.05^+0.05 1.94_-0.06^+0.07 -1.10_-0.03^+0.03 435.60/662 47.700 51.000 -1.75_-0.06^+0.04 2.14_-0.05^+0.12 -1.10_-0.04^+0.03 470.74/662 51.000 55.200 -1.83_-0.06^+0.06 1.70_-0.08^+0.06 -1.21_-0.05^+0.04 465.16/662 55.200 61.600 -1.95_-0.02^+0.07 1.32_-0.19^+0.23 -1.39_-0.02^+0.05 468.45/662 61.600 69.600 -1.94_-0.08^+0.05 (2.69_-0.13^+0.45)* -1.56_-0.06^+0.04 459.07/662 69.600 80.000 -1.94_-0.14^+0.10 (2.48_-0.19^+0.55) -1.80_-0.11^+0.08 455.71/662 80.000 90.000 -2.01_-0.12^+0.09 (2.69_-0.22^+0.89) -1.92_-0.09^+0.06 453.34/662 90.000 100.000 -2.17_-0.22^+0.07 unconstrained -2.23_-0.16^+0.04 397.65/662 100.000 115.000 -2.16_-0.39^+0.14 unconstrained -2.53_-0.32^+0.09 336.77/662 115.000 140.000 -2.56_-0.53^+0.22 unconstrained -3.01_-0.40^+0.12 264.86/662
http://arxiv.org/abs/2307.07367v1
20230714142212
Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow
[ "Maria del Rio-Chanona", "Nadzeya Laurentsyeva", "Johannes Wachs" ]
cs.SI
[ "cs.SI", "cs.AI", "cs.CY" ]
Large language models like ChatGPT efficiently provide users with information about various topics, presenting a potential substitute for searching the web and asking people for help online. But since users interact privately with the model, these models may drastically reduce the amount of publicly available human-generated data and knowledge resources. This substitution can present a significant problem in securing training data for future models. In this work, we investigate how the release of ChatGPT changed human-generated open data on the web by analyzing the activity on Stack Overflow, the leading online Q&A platform for computer programming. We find that relative to its Russian and Chinese counterparts, where access to ChatGPT is limited, and to similar forums for mathematics, where ChatGPT is less capable, activity on Stack Overflow significantly decreased. A difference-in-differences model estimates a 16% decrease in weekly posts on Stack Overflow. This effect increases in magnitude over time, and is larger for posts related to the most widely used programming languages. Posts made after the release of ChatGPT receive voting scores similar to those made before, suggesting that ChatGPT is not merely displacing duplicate or low-quality content. These results suggest that more users are adopting large language models to answer questions and that they are better substitutes for Stack Overflow for languages for which they have more training data. Using models like ChatGPT may be more efficient for solving certain programming problems, but their widespread adoption and the resulting shift away from public exchange on the web will limit the open data people and models can learn from in the future. § INTRODUCTION Over the last thirty years, humans have constructed a vast library of information on the web. Using powerful search engines, anyone with an internet connection can access valuable information from online knowledge repositories like Wikipedia, Stack Overflow, and Reddit. New content and discussions posted online are quickly integrated into this ever-growing ecosystem, becoming digital public goods used by people all around the world to learn new technologies and solve their problems <cit.>. More recently, these public goods have been used to train artificial intelligence (AI) systems, in particular, large language models (LLMs) <cit.>. For example, the LLM ChatGPT <cit.> answers user questions by summarizing the information contained in these repositories. The remarkable effectiveness of ChatGPT is reflected in its quick adoption <cit.> and application across diverse fields including auditing <cit.>, astronomy <cit.>, medicine <cit.>, and chemistry <cit.>. Randomized controlled trials show that using LLMs significantly boosts productivity in computer programming, professional writing, and customer support tasks <cit.>.
Indeed, the widely reported successes of LLMs like ChatGPT suggest that we will observe a significant change in how people search for, create, and share information online. Ironically, if LLMs like ChatGPT present a substitute for traditional ways of searching and interrogating the web, then they will displace the very human behavior that generated their original training data. User interactions with ChatGPT are the exclusive property of OpenAI, its creator. Only OpenAI will be able to learn from the information contained in these interactions. As people begin to use LLMs instead of online knowledge repositories to find information, contributions to these repositories will likely decrease, diminishing the quantity and quality of these digital public goods. While such a shift would have significant social and economic implications, we have little evidence on whether people are actually substituting their consumption and creation of digital public goods with ChatGPT. The aim of this paper is to evaluate the impact of LLMs on the generation of open data on question-and-answer (Q&A) platforms. Since LLMs perform relatively well on software programming tasks <cit.>, we study Stack Overflow, the largest online Q&A platform for software development and programming. We present three results. First, we examine whether the release of ChatGPT has decreased the volume of posts, i.e., questions and answers, posted on the platform. We measure the overall effect of ChatGPT's release on Stack Overflow activity using a difference-in-differences model. We compare the weekly posting activity on Stack Overflow against that of four comparable Q&A platforms. These counterfactual platforms are less likely to be affected by ChatGPT either because their users are less able to access ChatGPT or because ChatGPT performs poorly on the kinds of questions discussed on those platforms. We find that posting activity on Stack Overflow decreased by about 16% following the release of ChatGPT, with the effect growing to around 25% within six months. Second, we investigate whether ChatGPT is simply displacing simpler or lower-quality posts on Stack Overflow. To do so, we use data on up- and downvotes, simple forms of social feedback provided by other users to rate posts. We observe no change in the votes posts receive on Stack Overflow since the release of ChatGPT. This finding suggests that ChatGPT is displacing a wide variety of Stack Overflow posts, including high-quality content. Third, we study the heterogeneity of the impact of ChatGPT across different programming languages discussed on Stack Overflow. We test for these heterogeneities using an event study design. We observe that posting activity in some languages like Python and Javascript has decreased significantly more than the global site average. Using data on programming language popularity on GitHub, we find that the most widely used languages tend to have larger relative declines in posting activity. Our analysis points to several significant implications for the sustainability of the current AI ecosystem. The first is that the decreased production of open data will limit the training of future models <cit.>. LLM-generated content itself is an ineffective substitute for training data generated by humans for the purpose of training new models <cit.>. One analogy is that training an LLM on LLM-generated content is like making a photocopy of a photocopy, providing successively less satisfying results <cit.>.
And while human feedback to LLMs may facilitate continued learning, such feedback remains private information. This suggests a second issue: ChatGPT's initial advantage can compound if it effectively learns from its interactions with users while simultaneously crowding out the generation of new open data <cit.>. More broadly, a shift from open data to a more closed web will likely have significant second-order impacts on the digital economy and how we access and share information. The rest of the paper is organized as follows. We introduce our empirical set-up, including the data and models used in our analysis, in Section <ref>. Section <ref> presents our results. In Section <ref>, we discuss their implications. We argue that our findings of a significant decline in activity on Stack Overflow following the release of ChatGPT have important implications for the training of future language models, competition in the artificial intelligence sector, the provision of digital public goods, and how humans seek and share information. § DATA AND METHODS §.§ Stack Exchange and Segmentfault data To understand the effect ChatGPT can have on digital public goods, we compare the change in Stack Overflow's activity with the activity on a set of similar platforms. These platforms are similar to Stack Overflow in that they are technical Q&A platforms, but are less prone to substitution by ChatGPT given their focus or target group. Specifically, we focus on the Stack Exchange platforms Mathematics and Math Overflow and on the Russian-language version of Stack Overflow. We also examine a Chinese-language Q&A platform on computer programming called Segmentfault. Mathematics and Math Overflow focus on university- and research-level mathematics questions respectively. We consider these sites to be less susceptible to replacement by ChatGPT given that, during our study's period of observation, the free-tier version of ChatGPT performed poorly (0-20th percentile) on advanced high-school mathematics exams <cit.>, and was therefore unlikely to serve as a suitable alternative to these platforms. The Russian Stack Overflow and the Chinese Segmentfault have the same scope as Stack Overflow, but target users located in Russia and China, respectively. We consider these platforms to be less affected by ChatGPT given that ChatGPT is officially unavailable in the Russian Federation, Belarus, Russian-occupied Ukrainian territory, and the People's Republic of China. Although people in these places can and do access ChatGPT via VPNs <cit.>, such barriers still represent a hurdle to widespread fast adoption. We extract all posts (questions or answers) on Stack Overflow, Mathematics, Math Overflow, and Russian Stack Overflow from their launch to early June 2023 using <https://archive.org/details/stackexchange>. We scraped the data from Segmentfault directly. Our dataset comprises 58 million posts on Stack Overflow, over 900 thousand posts for the Russian-language version of Stack Overflow, 3.5 million posts on Mathematics Stack Exchange, 300 thousand posts for Math Overflow, and about 300 thousand for Segmentfault. We focus our analysis on data from January 2019 to June 2023, noting that our findings are robust to alternative time windows. For each post, our dataset includes the number of votes (up – positive feedback, or down – negative feedback) the post received, the author (user), and whether the post is a question or an answer. 
Furthermore, each post can have up to 5 tags – predefined labels that summarize the content of the post, for instance, an associated programming language. For more details on the data used, we refer the reader to section <ref>. From this point forward, we will refer to Mathematics, Math Overflow, Russian Stack Overflow, and Segmentfault, along with their corresponding posts, as the counterfactual platforms and posts. §.§ Models Difference-in-differences We estimate the effect of ChatGPT on posting activity on Stack Overflow using a difference-in-differences method with four counterfactual platforms. We aggregate posting data at the platform-week level and fit a regression model using ordinary least squares (OLS): IHS(Posts_p,t) = α_p + λ_t + β× Treated_p,t + ∑_p∈ Pθ_p· t + ϵ_p,t, where Posts_p,t is the number of posts on platform p in week t, which we transform using the inverse hyperbolic sine function (IHS) <cit.>.[We prefer this transformation because then the coefficient of interest can be roughly interpreted as a percent change in posting activity. The IHS behaves similarly to a natural log transformation for positive values but remains defined for zeroes. Our estimates are qualitatively similar to using log transformation, standardization or raw data.] α_p are platform fixed effects, λ_t are time (week) fixed effects, θ_p are platform-specific linear time trends, and ϵ_p,t is the error term. The coefficient of interest is β, which captures the estimated effect of ChatGPT on posting activity on Stack Overflow relative to the less affected platforms: Treated equals one for weeks after the release of ChatGPT (starting November 27, 2022) when the platform p is Stack Overflow, and zero otherwise. We report robust standard errors clustered at the monthly level. To check the dynamics of the effect and to examine pretrends, we employ a similar specification but, instead of β× Treated_p,t, we use ∑_t β_t × I(week = t) × I(platform = StackOverflow). We standardize the effects to 0 in the week before the public release of ChatGPT by dropping the indicator for that week from the regression. Separate coefficients for the 25 weeks following the release of ChatGPT show how the effects of ChatGPT are realized over time. Separate coefficients for the first 100 weeks before the release allow us to verify that posts on Stack Overflow had evolved similarly to the activity on counterfactual platforms prior to the introduction of ChatGPT. The advantage of the difference-in-differences method compared to a simple event study with Stack Overflow data only is that we estimate ChatGPT effects net of possible weekly shocks that are common across the technical Q&A platforms. For the interpretation of the coefficient, we note that we estimate the relative change in posting activity on Stack Overflow compared to activity on other platforms, before vs. after the release of ChatGPT. Event Study When analyzing the effect of ChatGPT on activity across programming languages, we can no longer compare data from Stack Overflow with the counterfactual platforms. This is because the tags annotating posts are different between Stack Exchange platforms. Therefore, we study ChatGPT's heterogeneous effects using an event-study specification.
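Before specifying the event-study model in detail, we note how the difference-in-differences regression above can be estimated in practice. The following is a minimal sketch, not the authors' code, using pandas and statsmodels; the synthetic weekly panel, the platform labels and the column names are placeholder assumptions, while the transformation, fixed effects, platform-specific trends and monthly clustering mirror the specification above.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
platforms = ["stackoverflow", "ru_stackoverflow", "math_se", "mathoverflow", "segmentfault"]
weeks = pd.date_range("2019-01-06", "2023-06-04", freq="W")
panel = pd.DataFrame(
    [(p, w, rng.poisson(60000 if p == "stackoverflow" else 2000)) for p in platforms for w in weeks],
    columns=["platform", "week", "posts"],
)

release = pd.Timestamp("2022-11-27")
panel["ihs_posts"] = np.arcsinh(panel["posts"])               # inverse hyperbolic sine of weekly posts
panel["treated"] = ((panel["platform"] == "stackoverflow") & (panel["week"] >= release)).astype(int)
panel["trend"] = (panel["week"] - panel["week"].min()).dt.days / 7.0
panel["week_id"] = panel["week"].astype(str)                  # week fixed effects as categories
panel["month"] = panel["week"].dt.to_period("M").astype(str)  # clusters for the standard errors

# platform fixed effects, week fixed effects, platform-specific linear trends, treatment dummy;
# statsmodels' pseudo-inverse handles the redundancy between the week effects and a common trend
did = smf.ols("ihs_posts ~ treated + C(platform) + C(week_id) + C(platform):trend", data=panel)
res = did.fit(cov_type="cluster", cov_kwds={"groups": panel["month"]})
print(res.params["treated"])

The coefficient on treated then plays the role of β above, with e^β − 1 approximating the implied relative change in weekly Stack Overflow posts (negative for a decline).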
For each programming language i (identified by a tag), we model the standardized number of posts in a week t on Stack Overflow by fitting a simple linear time trend with seasonal effects: Posts_i,t = β_0 + β_1 t + β_2 ChatGPT + β_3 (t × ChatGPT) + η + ϵ_i,t, where t is the linear time trend and η are seasonal (month-of-year) fixed effects. ChatGPT equals one if the week t is after the release of ChatGPT and zero otherwise. Coefficient β_2 captures the change in the intercept and coefficient β_3 reflects the change in the slope of the time trend following the release of ChatGPT. In the tag-level analysis, we standardize the dependent variable in order to be better able to compare effects across programming languages with different numbers of posts.[We standardize the number of posts within each tag by subtracting the mean and dividing by the standard deviation. Both statistics are calculated before the release of ChatGPT.] We report HAC standard errors. § RESULTS §.§ Decrease in posting activity Figure <ref>A shows the evolution of activity on Stack Overflow from January 2016 to June 2023. Up to 2022 there was a gradual decrease in activity from roughly 110,000 to 60,000 posts per week, that is, roughly 7,000 fewer posts per week each year. However, after the release of ChatGPT (November 30th, 2022) posting activity decreased sharply, with the weekly average falling from around 60,000 posts to 40,000 within six months. Compared to the pre-ChatGPT trend, this decrease represents more than five years' worth of deceleration in just half a year. The decrease in activity on Stack Overflow is larger than for similar platforms for which we expect ChatGPT to be a less viable substitute. Figure <ref>B shows the standardized posting activity on Stack Overflow, the Russian- and Chinese-language counterparts of Stack Overflow, and two mathematics Q&A platforms. We standardize posting activity by the average and standard deviation of post counts within each platform prior to the release of ChatGPT. Figure <ref>B shows that Stack Overflow activity deviates markedly from activity on the other platforms after the release of ChatGPT. The plot visualizes the standardized posting activity within each platform since early 2022. Smoothed weekly activity varies between plus and minus two standard deviations for all platforms for most of 2022. Events, such as the Chinese New Year and other holidays and the start of the Russian invasion of Ukraine, are visible. Following the release of ChatGPT, we observe a significant decline in activity on Stack Overflow. We report the estimated effect of our difference-in-differences model in Table <ref> and visualize the weekly estimates of the relative change in Stack Overflow activity in Figure <ref>. Table <ref> indicates that ChatGPT decreased posting activity on Stack Overflow by 15.6% (1-e^-0.17). These results are robust to changes in the controls and the starting point of the data time series. We also tested for heterogeneity in subsets of the data: considering only questions (rather than counting both questions and answers) and posts on weekdays. In both subsets our estimates did not deviate significantly from the main result: we estimate a 12% relative decrease in questions and a 14% relative decrease in posts on weekdays. Figure <ref> shows that the impact of ChatGPT is increasing over time and is, by the end of our study, greater in magnitude than the average post-ChatGPT effect estimated in Table <ref>.
By the end of April 2023, the estimated effect stabilizes at around 25%. Interestingly, ChatGPT use, in general, peaked around this time.[<https://www.similarweb.com/blog/insights/ai-news/chatgpt-bard/>] Voting activity A decrease in overall activity on Stack Overflow does not necessarily signify a problem; it could indicate a beneficial shift toward fewer but higher quality posts, as less valued or simplistic questions may be outsourced to ChatGPT. We investigate this possibility using data on voting activity but observe no significant change in the typical appreciation of posts after ChatGPT's release. The time series of upvotes and downvotes, which we use as a proxy for the overall quality of posts, remain stable across the release of ChatGPT. Specifically, Figure <ref> reports the average number of upvotes and downvotes that posts from a given week receive within five weeks of their creation. Upvotes are shown in grey and downvotes in blue; neither series changes significantly. Indeed the relative stability of voting behavior suggests that the quality of posts on Stack Overflow has not meaningfully changed after the introduction of ChatGPT. §.§ Heterogeneities across tags Studying posts about different programming languages on Stack Overflow, we find significant heterogeneities in the impact of ChatGPT on posting behavior across languages. In Facet A of Figure <ref>, we plot the estimated effects (slope changes in the linear time trend after the introduction of ChatGPT) for those 69 tags that we connected to a programming language on GitHub. We estimate a negative effect of ChatGPT for most tags, but the estimates range between a 0.25 standard deviation decrease in slope (i.e. change per week following the ChatGPT release) to a 0.03 standard deviation increase. We observe that some of the widely used languages like Python and Javascript are the most impacted by ChatGPT. Interestingly, the model estimates that posts about CUDA have increased (though not significantly) after ChatGPT was released. CUDA is an application programming interface created by Nvidia, a graphics card manufacturer, that facilitates the use of graphics cards for computational tasks, in particular for machine learning and artificial intelligence. This exception again demonstrates the impact of ChatGPT on the world of computer programming: people are increasingly interested in software relating to artificial intelligence. In Facet B, we compare the estimated impact of ChatGPT on different languages against salary data of developers using those languages. We source salary data from the 2022 Stack Overflow developer survey, focusing on US-based developers and calculating medians of reported salaries. We observe no clear relationship between the estimated labor market value of a specific language and changes in posting behavior in that language post-ChatGPT. To better understand the relationship between the size of the user base of a programming language and how it is impacted by ChatGPT, we compare our estimates with data from GitHub, the largest online platform for collaborative software development. Among other sources, ChatGPT was trained on data from GitHub. Because training data was collected up to September 2021, we use data on language use on GitHub up to June 2021. In Facet C of Figure <ref>, we visualize the relationship between the number of GitHub repositories (coding projects) in a specific language and the estimated impact of ChatGPT on that language. 
We observe that languages with more GitHub repositories tend to be more significantly impacted by the release of ChatGPT in terms of associated activity on Stack Overflow (Pearson's ρ = -0.45, p<.001). § DISCUSSION The rate at which people have adopted ChatGPT is one of the fastest in the history of technology <cit.>. It is essential that we better understand what activities this new technology displaces and what second-order effects this substitution may have <cit.>. This paper shows that after the introduction of ChatGPT there was a sharp decrease in human content creation on Stack Overflow. We compare the decrease in activity on Stack Overflow with other Stack Exchange platforms where current LLMs are less likely to be used. Using a difference-in-differences model, we find about 16% relative decrease in posting activity on Stack Overflow, with a larger effect in later months. We observed no large change in social feedback on posts, measured using votes, following ChatGPT's release, suggesting that average post quality has not changed. Posting activity related to more popular programming languages decreased more on average than that for more niche languages. These results suggest that users partially substituted Stack Overflow with ChatGPT. Consequently, the wide adoption of LLMs can decrease the provision of digital public goods, in particular, the open data previously generated by interactions on the web. Our results and data have some shortcomings that point to open questions about the use and impact of LLMs. First, while we can present strong evidence that ChatGPT decreased the posting activity in Stack Overflow, we can only partially assess quality of posting activity using data on upvotes and downvotes. Users may be posting more challenging questions, ones that LLMs cannot (yet) address, to Stack Overflow. Future work should examine whether continued activity on Stack Overflow is more complex or sophisticated on average than posts from prior to ChatGPT release. Similarly, ChatGPT may have reduced the volume of duplicate questions about simple topics, though this is unlikely to impact our main results as duplicates are estimated to account for only 3% of posts <cit.>, and we do not observe significant changes in voting outcomes. A second limitation of our work is that we cannot observe the extent to which Russian- and Chinese-language users of the corresponding Q&A platforms are actually hindered from accessing ChatGPT; indeed recent work has shown a spike in VPN and Tor activity following the blocking of ChatGPT in Italy <cit.>. Given the potential economic importance of ChatGPT and similar LLMs, it is anyway essential that we better understand how such bans and blocks impact the accessibility of these tools <cit.>. Finally, we do not address the issue that ChatGPT may be used to generate Stack Overflow content. Stack Overflow policy effectively banned posts authored by ChatGPT within a week of its release. In any case, a significant amount of ChatGPT activity on Stack Overflow would mean that our measures underestimate the effect of ChatGPT. Despite these shortcomings, our results have important implications for the future of digital public goods. Before the introduction of ChatGPT, more human-generated content was posted to Stack Overflow, forming a collective digital public good due to their non-rivalrous and non-exclusionary nature – anyone with internet access can view, absorb, and extend this information, without diminishing the value of the knowledge. 
Now, much of this information is instead fed into privately owned LLMs like ChatGPT. This represents a significant and ongoing shift of knowledge from the public domain to private ones. This observed substitution effect poses several issues for the future of artificial intelligence in general. The first is that if language models crowd out open data creation, they will be limiting their own future training data and effectiveness. The second is that owners of the current leading models have exclusive access to user inputs and feedback, which, with a relatively smaller pool of open data, gives them a significant advantage against new competitors in training future models. Third, the decline of public resources on the web would reverse progress made by the web toward democratizing access to knowledge and information. Finally, the consolidation of humans searching for information around one or a few language models could narrow our explorations and focus our attention on mainstream topics. We briefly elaborate on these points, then conclude with a wider appeal for more research on the political economy of open data and AI, and how we can incentivize continued contributions to digital public goods. Training future models Our findings suggest that the widespread adoption of ChatGPT may make it difficult to train future model iterations <cit.>. Though researchers have already expressed concerns about running out of data for training AI models <cit.>, our results show that the use of LLMs can slow down the creation of new data. Given the growing evidence that data generated by LLMs cannot effectively train new LLMs <cit.>, modelers face the real problem of running out of useful data. If ChatGPT truly is a “blurry JPEG” of the web <cit.>, then, in the long run, it cannot effectively replace its most important input: data derived from human activity. The proliferation of LLMs has already impacted other forms of data creation: many Amazon Mechanical Turk workers now generate content (i.e., respond to surveys, evaluate texts) using ChatGPT <cit.>. Competition in the artificial intelligence sector A firm's early advantage in technological innovation often leads to significant market share <cit.>. In our case, ChatGPT is simultaneously decreasing the amount of open training data that competitors could use to build competing models, while capturing a valuable private source of user data. There is also a growing concentration in tech driven by a shift from companies going public to acquisitions <cit.> – indeed OpenAI is partially owned by Microsoft. These forces may lead to a compounding advantage for OpenAI. Though firms have long used the massive amounts of open data created by users of platforms like Wikipedia, Stack Overflow, GitHub, OpenStreetMap or Reddit to create products and capture value <cit.>, these products have not generally replaced those platforms. Lost economic value Digital public goods generate value in many ways besides feeding LLMs and other algorithms. For instance, Wikipedia is an important source of information worldwide, but in developing countries, readers are more often motivated by intrinsic learning goals and tend to read articles in greater detail <cit.>. Unequal access to artificial intelligence may also compound inequalities in growth and innovation between countries <cit.>. Digital public goods also provide direct value to the many websites that draw on open data to complement their core services with extra information <cit.>.
For instance, there is substantial interdependence between sites like Wikipedia, Reddit, and Stack Overflow and the search engines that use them to enrich responses to user queries via infoboxes <cit.>. Contributors to digital public goods like Stack Overflow or Open Source Software (OSS) often enjoy indirect benefits <cit.>. For instance, while OSS itself provides significant value in the global economy <cit.>, OSS contributions are valuable signals of a firm's capabilities to investors <cit.>. Individual contributions to Stack Overflow are used to signal ability on the labor market <cit.>. Any general tendency of ChatGPT to crowd out contributions to digital public goods may limit these valuable signals that reduce economic frictions. On the other hand, such signaling activity may serve as a powerful incentive to keep people contributing. Narrowing of information seeking The substitution effect we report likely has important second-order effects on how people search for information and their exposure to new ideas. LLMs likely favor well-established perspectives and, due to their efficiency, decrease the need for users to forage for information. These features of LLMs may reinforce a trend observed earlier in the context of the web. Specifically, internet search engines are thought to have pushed science toward consensus and narrower topics by improving the efficiency of information search and the visibility of mainstream information <cit.>. LLMs may also disincentivize the use of new or niche tools because they most amplify our productivity with those tools for which they have abundant training data. For instance, ChatGPT may not be able to help users of a new programming language that it has not seen many examples of. Given that LLMs are poised to change how we do research <cit.> and present a strong competitor to search engines <cit.>, we need to understand what LLM efficiency implies for our contact with diverse sources of information and incentives to try new things. More generally, models like ChatGPT are going to generate political and economic winners and losers, like many previous breakthrough technologies. While early evidence shows that these models enhance productivity especially among new and inexperienced workers <cit.>, there are other ways in which they may contribute to inequality between people and firms <cit.>, for instance via potential negative side effects of automation <cit.>. Our results suggest that the economics of data creation and ownership will become more salient: as data becomes more valuable, there will be growing interest in how creators of data can capture some of that value <cit.>. These multi-faceted aspects of the impact of LLMs suggest that the political economy of data and AI will be especially important in the coming years <cit.>. In this context, our work highlights the specific issue that valuable digital public goods may be under-produced as a result of the proliferation of AI. A natural follow-up question is how we can incentivize the creation of such goods. While unemployment shocks are known to increase the provision of digital public goods <cit.>, it would be an unsatisfying solution to suggest that people put out of work by automation will fill this gap. In the case of platforms like Stack Overflow, active users are often motivated by social feedback and gamification <cit.>, but the continual onboarding of new users is what keeps these platforms relevant in the long run <cit.>.
For the sake of a sustainable open web and an AI ecosystem that draws on its data, we should think about how to keep people exchanging information and knowledge online. § APPENDIX §.§ Data Stack Exchange platform sites The raw dataset obtained from <https://archive.org/details/stackexchange> contains nearly all posting activity on the question and answer platforms hosted on the Stack Exchange network from its launch in 2008 to early June 2023. These include Stack Overflow, its Russian-language version, and Math Overflow and Math Stack Exchange. Stack Overflow is the largest online Q&A platform for topics relating to computer programming and software development. It provides a community-curated discussion of issues programmers face <cit.>. Questions have multiple answers, and users debate the relative merits of solutions and alternatives in comments. A track record on Stack Overflow has value on the labor market as a signal of an individual's skills <cit.>. The data contains over 58 million posts, including both questions and answers. Posts are linked to their posting users, from which we infer posters' previous activity and can identify posts made by new users. Questions are annotated with tags indicating the topic of the post, including the programming languages used. Users can give posts upvotes or downvotes, providing posting users with social feedback and reputation points. The Russian-language version of Stack Overflow (over 900 thousand posts) and the mathematics-oriented platforms Math Stack Exchange (over 3.5 million posts) and Math Overflow (over 300 thousand posts) have identically structured data dumps hosted in the same location. Registered users can upvote and downvote posts made on Stack Exchange platforms. These votes provide a valuable signal of the value of posts <cit.>. They are the primary way users earn reputation points and status on Stack Exchange platforms. Votes also influence the ranking of posts in user feeds and search engine results, facilitating information filtering. Downvotes are used to moderate. The Stack Exchange data dump contains data on every vote cast, including the corresponding post, the date the vote was made, and whether it was an upvote or downvote. Segmentfault Segmentfault is a Chinese-language Q&A platform for developers that has many similarities with the Stack Exchange sites. Users post questions on programming topics and other users post answers. Questions are tagged by relevant languages and technologies, and there are similar gamification elements on the platform. We scraped data on all posts as of early June 2023, gathering over 300 thousand in total. Selection of tags Stack Overflow posts are annotated by tags which describe the concepts and technologies used in the post. For example, many tags indicate programming languages, web frameworks, database technologies, or programming concepts like functions or algorithms. Stack Overflow reconciles tags referring to the same things via a centralized synonym dictionary. We selected the 1,000 most used tags up to early June 2023, and focused on those 69 which could be directly linked to language statistics reported by GitHub, described next. GitHub data on programming language use We use data from the June 2021 GHTorrent data dump <cit.> as a proxy measure for the amount of open data available for each programming language. The dataset reports which languages are used in each project or repository on GitHub. We simply count the number of repositories mentioning each language.
We then link the languages with tags on Stack Overflow. As an alternative, we count the number of commits (elemental code contributions) made to each repository, and hence to each language. In the main paper we visualize the estimated effects of ChatGPT on specific tags that we can link to GitHub languages. We exclude some tags which refer to file formats or plain text, specifically: yaml, json, text, svg, markdown, and xml. §.§ Data and Code availability Data and code to reproduce our analyses will be made available in a subsequent draft. The Stack Overflow data dump is available here: <https://archive.org/details/stackexchange>. §.§ Acknowledgments We thank Frank Neffke, Gergő Tóth, Christoffer Koch, Sándor Juhász, Martin Allen, Manran Zhu, Karl Wachs, László Czaller, and Helene Strandt for helpful comments and discussions.
http://arxiv.org/abs/2307.05975v1
20230712074413
Outlier detection in regression: conic quadratic formulations
[ "Andrés Gómez", "José Neto" ]
math.OC
[ "math.OC", "cs.LG", "stat.ME", "stat.ML" ]
In many applications, when building linear regression models, it is important to account for the presence of outliers, i.e., corrupted input data points. Such problems can be formulated as mixed-integer optimization problems involving cubic terms, each given by the product of a binary variable and a quadratic term of the continuous variables. Existing approaches in the literature, typically relying on the linearization of the cubic terms using big-M constraints, suffer from weak relaxations and poor performance in practice. In this work we derive stronger second-order conic relaxations that do not involve big-M constraints. Our computational experiments indicate that the proposed formulations are several orders of magnitude faster than existing big-M formulations in the literature for this problem. § INTRODUCTION Several statistical and machine learning problems can be formulated as optimization problems of the form min_x,z ∑_i=1^m (y_i-a_i^⊤ x)^2(1-z_i) s.t. (x,z)∈ F⊆ℝ^n×{0,1}^m, where (a_i,y_i)∈ℝ^n+1 for all i∈{1,…,m} are given data and F is the feasible region. Problem (<ref>) includes the least trimmed squares problem as a special case, which is a focus of this paper and discussed at length in <ref>, but also includes regression trees <cit.> (where 1-z_i=1 indicates that a given datapoint is routed to a given leaf), regression problems with mismatched data <cit.> (where variables z indicate the datapoint/response pairs) and k-means <cit.> (where variables z represent assignment of datapoints to clusters). We point out that few or no mixed-integer optimization (MIO) approaches exist in the literature for (<ref>), as the problems are notoriously hard to solve to optimality, and heuristics are preferred in practice. The practical difficulty of problem (<ref>) is largely due to weak relaxations, such as standard big-M relaxations, which produce trivial lower bounds of 0 and gaps of 100%. The purpose of this work is thus to derive stronger relaxations of (<ref>), paving the way for efficient exact methods via MIO. §.§ Robust estimators and least trimmed squares Most statistical methods fail if the input data is corrupted by so-called outliers. The latter correspond to erroneous input data points resulting, e.g., from measurement, transmission, or recording errors, or from exceptional phenomena. Consider linear regression models, described by observations {(a_i,y_i)}_i=1^m where a_i∈ℝ^n are the features and y_i is the response associated with datapoint i. The classical ordinary least squares (OLS) estimator, defined as the minimizer of the optimization problem min_x∈ℝ^n∑_i=1^m(y_i-a_i^⊤ x)^2 = min_x∈ℝ^n‖y-Ax‖_2^2 (OLS), where A∈ℝ^m× n is the matrix with rows given by {a_i}_i=1^m, is known to be sensitive to spurious perturbations of the data. Two robust modifications of (<ref>) are commonly used in practice. The first calls for the addition of a regularization term, resulting in the least squares problem with Tikhonov regularization. Specifically, given a suitable matrix T (typically taken as the identity), the estimator is the optimal solution of min_x∈ℝ^n∑_i=1^m(y_i-a_i^⊤ x)^2+λ‖Tx‖_2^2 (LS+L2), which is robust against small perturbations of the data <cit.>. The second approach calls for replacing the least squares loss with the absolute value of the residuals, resulting in the least absolute deviations (LAD) problem min_x∈ℝ^n∑_i=1^m| y_i-a_i^⊤ x| (LAD).
Estimator (<ref>), which generalizes the median to multivariate regression, is preferred to (<ref>) in settings with outliers. Despite their popularity, (<ref>) and (<ref>) are known to be vulnerable to outliers. Robust estimators are often measured according to the breakdown point <cit.> – the smallest proportion of contaminated data that can cause the estimator to take arbitrarily large aberrant values. Clearly, estimators (<ref>) and (<ref>) have an unfavorable breakdown point of 0%: a single spurious observation with a_i=e_j, where e_j is the j-th standard basis vector, and y_i→±∞ will produce solutions where x_j takes arbitrarily bad values. M-estimators <cit.>, which include as special cases (<ref>) and regression with respect to the Huber loss, also have breakdown point of 0% <cit.>. Robust estimators with better breakdown point include the least median of squares (LMS) <cit.> which minimizes the median of the squared residuals. The least quantile of squares (LQS) <cit.> approach generalizes the latter by minimizing the q-th order statistic, i.e., the q-th smallest residual in absolute value for some given integer q≤ m. The least trimmed squares problem (LTS) <cit.>, consists in minimizing, for some h∈ℤ, the sum of the smallest h residual squares. Specifically, letting r_i(x)=|y_i-a_i^⊤ x| be the i-th residual, and letting |r_(1)(x)|≤|r_(2)(x)|≤…≤|r_(m)(x)| be the residuals sorted in nondecreasing magnitude order, the LTS estimator is the optimal solution of min_x∈ℝ^n∑_i=1^hr_(i)(x)^2+λTx_2^2. LTS+L2 Note that for h=m, (<ref>) corresponds to (<ref>). Intuitively, for h≤ m-1, the datapoints corresponding to the m-h largest residuals are observations flagged as outliers and discarded prior to using (<ref>) to fit a model on the remaining data. The original LTS estimator had λ=0, but we consider here the version with additional ℓ_2 regularization used in <cit.>, where the additional regularization helps counteracting strong collinearities between features and improves performance in low signal-to-noise regimes. The LMS and LTS estimators achieve an optimal breakdown point of 50% <cit.>. While LMS was more popular originally, as it is less difficult to compute, Rousseeuw and Van Driessen <cit.> argue that “the LMS estimator should be replaced by the LTS estimator" due to several desirable properties, including smoothness and statistical efficiency. Unfortunately, computing the LTS estimator is NP-hard <cit.> and even hard to approximate <cit.>. For the most part, problem (<ref>) is solved using heuristics. In particular, methods which alternate between fitting regression coefficients given a fixed set of h non-outlier observations and determining new outliers given fixed regression coefficients x̅ are popular in the literature <cit.>. Solution methods based on solving least trimmed squares with similar iterative approaches have also been proposed in the context of mixed linear regression with corruptions and more general problems, e.g. <cit.> and references therein. Under some specific assumptions on the model, convergence results to an optimal solution have been established for such algorithmic schemes <cit.>. However, in general, they do not provide guarantees and the quality of the resulting estimators can be poor. Agulló <cit.> proposed a branch and bound algorithm to solve (<ref>) to optimality, which is shown to be fast in instances with m≤ 30, but struggles in larger instances. 
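To make the alternating scheme described above concrete, the following is a minimal numpy sketch of a C-step style heuristic for the trimmed-squares problem with T = I; the function name, the random initialization, and the default parameter values are illustrative choices and not prescribed by the literature cited above.

```python
import numpy as np

def lts_cstep_heuristic(A, y, h, lam=0.01, max_iter=100, seed=0):
    """Alternating (C-step style) heuristic for least trimmed squares.

    Repeats two steps: (1) fit ridge-regularized least squares (T = I) on the
    h currently kept observations, (2) keep the h observations with the
    smallest absolute residuals under the new fit.  The trimmed objective is
    non-increasing, so the procedure stops at a local optimum; there is no
    global optimality guarantee.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    keep = rng.choice(m, size=h, replace=False)   # initial guess of inliers
    x = np.zeros(n)
    for _ in range(max_iter):
        Ak, yk = A[keep], y[keep]
        # ridge term lam * ||x||^2 keeps the normal equations well posed
        x_new = np.linalg.solve(Ak.T @ Ak + lam * np.eye(n), Ak.T @ yk)
        r = np.abs(y - A @ x_new)
        keep_new = np.argsort(r)[:h]              # h smallest residuals
        if np.array_equal(np.sort(keep_new), np.sort(keep)):
            x = x_new
            break
        keep, x = keep_new, x_new
    return x, np.sort(keep)
```

Because each step can only decrease the trimmed objective, the iteration terminates at a fixed point, but, as noted above, the resulting estimator can be a poor local minimum.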
A first MIO formulation for (<ref>) was proposed in <cit.>, although the authors observe that the resulting optimization problem is difficult to solve and do not provide computations. To the best of our knowledge, the first implementation of a MIO algorithm for (<ref>) was done in <cit.>, based on a formulation using big-M constraints, where the authors report solution times of two seconds for instances with m=25 and also comment on larger computational times for larger instances. In a subsequent work by the same research group <cit.>, the authors report solution times in seconds for problems with n=2 and m≤ 50, and in minutes for problems with 100≤ m≤ 500, although all computations are performed on synthetic data. In a recent paper, <cit.> propose another big-M formulation for a generalization of (<ref>) (where sparisty is also imposed on regression variables x), and report computational times in minutes for synthetic instances with n and m in the low hundreds. We discuss these MIO approaches further in <ref>. Finally, we point that exact big-M based MIO algorithms and continuous optimization heuristics were proposed in <cit.> for the related LMS problem: the authors report that MIO methods are dramatically outperformed by the continuous optimization approaches, with the objective value of MIO solutions being up to 400x worse than the objective of heuristic solutions (unless the heuristics solutions are used as a warm-start). §.§ Contributions, outline and notation In this work, we introduce strong, big-M free, conic quadratic reformulations for (<ref>) –and, more generally, problems of the form (<ref>). Extensive computational experiments on diverse families of instances (both synthetic and real) clearly point out strong improvements over current state-of-the-art approaches. In particular, the proposed formulations results in orders-of-magnitude improvements over existing big-M formulations in our computations. We refrain from providing an estimate of the scalability of the approach: we show instances with (n,m)=(20,500) that are solved in 10 seconds, and instances with (n,m)=(4,50) that cannot be solved within a time limit of 10 minutes. Indeed, for MIO approaches, the effectiveness of the approach depends on more factors than simply the size of the instance, including the number m-h of observations to be discarded, the regularization parameter λ, and the overall structure of the dataset (with synthetic instances being considerably easier than real ones). The paper is organized as follows. We close this section with some notation. In <ref> we review the literature on MIO approaches for linear regression problems. Convexification results related to sets originating from (<ref>) are presented in <ref>. The convexifications are used to derive conic quadratic reformulations of (<ref>) in <ref>. The experimental framework and computational results are presented in <ref> and we conclude the paper in <ref>. Notation. For any positive integer n, let [n] stand for the set {1,2,…,n}. The vectors and matrices are represented with bold characters. The all-zero and all-one vectors and matrices (with appropriate dimensions) are represented by 0 and 1 respectively. The i-th unit vector is represented by e_i. The notation I stands for the identity matrix. Given a vector d∈^n, we let Diag(d)∈^n× n denote the diagonal matrix with elements Diag(d)_ii=d_i. Given a square matrix Q, we let Q^† denote the pseudoinverse of Q. 
§ REVIEW OF MIO METHODS FOR OUTLIER DETECTION There has been a recent trend of using mathematical optimization techniques to tackle hard problems arising in the context of linear regression. In particular, there is a stream of research focused on the best subset selection problem <cit.>, in which at most k of the regression variables in (<ref>) can take non-zero values. Variants of best subset selection, in which information criteria are used to determine the number of non-zero variables, have also been considered in the literature <cit.>. Related models have also been used to tackle inference problems with graphical models and sparsity <cit.>. We point out that most of the approaches for sparse regression are based on improving continuous relaxations by exploiting a ridge regularization term λx_2^2 through the perspective reformulation <cit.>. As we show in this paper, the Tikhonov regularization λTx_2^2 is also fundamental for improving relaxations for (<ref>). Despite the plethora of MIO approaches for sparse regression, there is a dearth of similar methods for regression problems with outliers. Indeed, problems such as (<ref>) appear to be fundamentally more difficult than sparse regression problems. Observe that problem (<ref>) admits the natural mixed-integer cubic formulation <cit.> min_x∈ℝ^n,z∈{0,1}^m ∑_i=1^m(y_i-a_i^⊤ x)^2(1-z_i)+λTx_2^2 s.t. 1^⊤ z≤ m-h, where z_i=1 if datapoint i is flagged as an outlier and discarded, and z_i=0 otherwise. Note that (<ref>) is a special case of (<ref>), where F is given by a cardinality constraint. Formulation (<ref>) cannot be effectively used with most MIO software. Indeed, its natural continuous relaxation, obtained by relaxing the binary constraints to bound constraints 0≤z≤1, is non-convex. To circumvent this issue, Zioutas et al. <cit.> reformulated (<ref>) as the convex quadratic mixed integer optimization problem min_u,x,z ∑_i=1^mu_i^2+λTx_2^2 s.t. -y_i+a_i^⊤x≤ u_i+z_i M ∀ i∈ [m] y_i-a_i^⊤x≤ u_i+z_i M ∀ i∈ [m] 1^⊤z≤ m-h u∈ℝ^m_+, x∈ℝ^n, z∈{0,1}^m where M is a sufficiently large fixed constant and u_i represents the absolute value of the i-th residual. Indeed, in any optimal solution (u^*,x^*,z^*) of (<ref>), having z_i^*=1 (resp. z_i^*=0) implies u_i^*=0 (resp. u_i^*=|y_i-a_i^⊤x|), i.e. the objective value is sum of the squared residuals of the non-outlier datapoints. While formulation (<ref>) can be directly used with most MIO solvers, the natural continuous relaxation is trivial. Indeed, regardless of the data (A,y,T), an optimal solution of the continuous relaxation is given by u^*=0, x^*=0 and z^*=((m-h)/m)1. The objective value of this relaxation is thus equal to the trivial lower bound of 0 (resulting in a 100% optimality gap), which leads to large branch-and-bound trees as solvers cannot effectively prune the search space. Moreover, the solutions of the continuous relaxations are essentially uninformative, thus MIO solvers –which rely on these to produce feasible solutions and inform branching decisions– struggle to tackle problem (<ref>). In fact, as we show in <ref>, any relaxation based on a convex reformulation of the individual cubic terms (y_i-a_i^⊤ x)^2(1-z_i) necessarily results in trivial bounds and solutions. We point out that this phenomenon sets apart regression problems with outliers from sparse regression problems: the continuous relaxations of the natural big-M formulations of sparse regression problems (e.g., see <cit.>) is equivalent to the least squares problem (<ref>), producing non-trivial bounds and solutions. 
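For concreteness, the big-M formulation above can be prototyped in a few lines with cvxpy. This is only a sketch under the assumptions T = I and a user-supplied big-M value; the function name and defaults are illustrative, and a mixed-integer-capable solver (e.g., GUROBI or SCIP through cvxpy) must be available for the boolean variables.

```python
import cvxpy as cp

def lts_bigM_mio(A, y, h, lam=0.01, M=1000.0):
    """Big-M mixed-integer QP sketch: z_i = 1 flags observation i as an
    outlier, u_i models |y_i - a_i' x| for non-outliers (T = I)."""
    m, n = A.shape
    x = cp.Variable(n)
    u = cp.Variable(m, nonneg=True)
    z = cp.Variable(m, boolean=True)
    residual = y - A @ x
    constraints = [residual <= u + M * z,
                   -residual <= u + M * z,
                   cp.sum(z) <= m - h]
    objective = cp.Minimize(cp.sum_squares(u) + lam * cp.sum_squares(x))
    cp.Problem(objective, constraints).solve()  # needs a MIQP-capable solver
    return x.value, z.value
```

Relaxing the boolean constraint on z to 0 ≤ z ≤ 1 in this sketch returns exactly the trivial point described above (u = 0, x = 0, z = ((m-h)/m)·1 with objective value 0), which is the weakness the conic formulations developed next are designed to overcome.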
We conjecture that the difficulty to produce a “reasonable" convex relaxation of (<ref>) is the reason why few MIO approaches exist for regression problems with outliers. A notable exception is <cit.> , which proposes strong conic quadratic formulations for outlier detection with time series data. However, the methodology proposed in that paper is tailored to time series data and cannot be generalized to problem (<ref>). § CONVEXIFICATION RESULTS In this section we investigate the convex hull of sets related to terms arising in the formulation of problems such as (<ref>). To motivate our approach, let us first study the convex hull of the set Y_c = {(x,z,t)∈ℝ^n×{0,1}×ℝ t≥(c-a^⊤x)^2(1-z) } where c∈ℝ is a scalar. Y_c may be interpreted as the mixed-integer epigraph of the error function associated with a single datapoint. As Proposition <ref> below shows, any closed convex relaxation of Y_c is trivial. In other words, any formulation of (<ref>) based only on convex reformulations of each individual cubic term will result in 100% gaps and non-informative relaxations. The closure of the convex hull of Y_c is given by (Y_c)=^n× [0,1]×_+. Consider any point (x̅,z̅,t̅)∈^n× [0,1]×_+ with 0<z̅<1. Observe that (x̅,z̅,t̅)= z̅( x̅/z̅-1-z̅/z̅c ·a/a_2^2,1,0)+(1- z̅) (c·a/a_2^2,0,t̅/1-z̅), where both ( x̅/z̅-1-z̅/z̅c ·a/a_2^2,1,0)∈ Y_c and (c·a/a_2^2,0,t̅/1-z̅)∈ Y_c, and thus (x̅,z̅,t̅)∈(Y_c). Moreover, since (x̅,0,t̅)=lim_z→ 0^+(x̅,z,t̅), we find that (x̅,z̅,t̅)∈(Y_c) even if z=0. Thus, to derive stronger relaxations, it is necessary to study a more general set, capturing more structural information about the optimization problem. In particular, the formulations we propose to tackle problem (<ref>) are based on a study of the set Y_c,Q = {(x,z,t)∈ℝ^n×{0,1}×ℝ t≥x^⊤Qx+ (c-a^⊤x)^2(1-z) } where Q∈ℝ^n× n represents a symmetric and positive definite matrix. We provide hereafter descriptions of the convex hull of Y_c,Q in the original space of variables for the homogeneous case (i.e., when c=0) and in an extended space in the non-homogeneous case (c≠ 0). §.§ Convexification for the homogeneous case The convexification of set Y_0,Q = {(x,z,t)∈ℝ^n×{0,1}×ℝ t≥x^⊤Qx+ (a^⊤x)^2(1-z) } admits a relatively simple description in the original space of variables. The closure of the convex hull of set Y_0,Q is (Y_0,Q)= {(x,z,t)∈ℝ^n×[0,1]×ℝ t≥x^⊤ Q x + (1-z)(a^⊤x)^2/1+zQ^-1/2a_2^2}. Let T denote the set in the right-hand side of the equation in the statement of the proposition. We show next that: ∙ T is convex, ∙ T induces a relaxation of Y_0,Q, and ∙ optimization of a linear function over set T is equivalent to optimization over Y_0,Q. ∙ Convexity We show convexity of T by establishing it is equivalent to the SDP-representable set given by constraints 0≤ z≤ 1, [ W x; x^⊤ t ]≽ 0, W=Q^-1-(1-z)Q^-1aa^⊤ Q^-1/1+Q^-1/2a_2^2. Note that W≻ 0 since for any y≠0, y^⊤W y≥y^⊤Q^-1 y -(a^⊤ Q^-1 y)^2/1+ Q^-1/2a_2^2≥(y^⊤Q^-1y)·( 1-Q^-1/2a_2^2/1+Q^-1/2a_2^2)>0, where the second inequality uses Cauchy-Schwarz inequality and the last one follows from y≠0 and the definition of Q^-1≻ 0. Since W is invertible, we find by using the Schur complement <cit.> that [ W x; x^⊤ t ]≽ 0⇔ t≥x^⊤ W^-1x, and using the Sherman Morrison formula <cit.> we can establish that W^-1=Q+1-z/1+zQ^-1/2a_2^2aa^⊤. ∙ Relaxation Observe that if z=0 then T reduces to the inequality t≥x^⊤ Qx+(a^⊤ x)^2, and if z=1 then T reduces to t≥x^⊤ Qx. This is precisely the disjunction encoded by Y_0,Q, hence T is indeed a relaxation. 
∙ Equivalence Now, to prove T⊆(Y_0,Q), let us consider the optimization of an arbitrary linear function over the sets Y_0,Q and T: min_(x,z,t)∈ Y_0,Qα^⊤x+β z+γ t min_(x,z,t)∈ Tα^⊤x+β z+γ t with α∈ℝ^n, β∈ℝ and γ∈ℝ. Obviously if (<ref>) has an optimal solution (x^*,z^*,t^*) with z^*∈{0,1}, then it is also an optimal solution for (<ref>). We then show that whenever (<ref>) admits an optimal solution, there exists one with z binary. And if no optimal solution exists, then both problems (<ref>)-(<ref>) are unbounded. * If γ<0, then setting x=0, z=0 and considering t→ +∞, we see that problems (<ref>)-(<ref>) are unbounded. * If γ=0 and α=0, both problems (<ref>)-(<ref>) admit an optimal integral solution of the form (0,z^*,0) with z^*∈{0,1} optimal solution of min_z∈ [0,1]β z. * If γ=0 and α_j≠ 0 for some j∈ [n], consider the points of the form (σe_j,1,λ_i,jσ^2) with σ∈ℝ. They all belong to the sets Y_0,Q and T. Considering then σ→±∞ (depending on the sign of α_j), we obtain that the problems (<ref>)-(<ref>) are unbounded. We can assume that γ>0 since (<ref>) trivially has a binary solution if γ=0 and α=0, or both problems are unbounded (for any other combination of parameters with γ≤ 0). Moreover, by scaling, we can suppose that γ=1. Then, assume that (<ref>) has an optimal solution (x^*,z^*,t^*) with 0<z^*<1. The point (x^*,z^*) is an optimal solution of min_(x,z)∈ℝ^n× [0,1] q(x,z) with q(x,z)=α^⊤x+β z+ Q^1/2x_2^2 + (1-z) (a^⊤x)^2/1+ z Q^-1/2a_2^2. Fixing z in (<ref>) and using the first order optimality conditions, we deduce the following expression of an optimal solution x(z) of min_x∈ℝ^n q(x,z): x(z)=-1/2Q^-1α + 1-z/2( 1+Q^-1/2 a_2^2)Q^-1aa^⊤Q^-1α. Thus, problem (<ref>) reduces to min_z∈ [0,1] q(x(z),z). Substituting x(z) by its expression (<ref>) in (<ref>), we obtain that q(x(z),z) is a linear function of z. To be more precise, after computations, we get the following expression. q(x(z),z)=β z - 1/4Q^-1/2α_2^2+ (a^⊤Q^-1α)^2/4( 1+Q^-1/2a_2^2)(1-z). Thus, (<ref>) admits an optimal solution with z∈{0,1}, concluding the proof. The next result provides an alternative SOCP representation of (Y_0,Q) in an extended space. The convex hull of Y_0,Q is described (in an extended space) by t≥τ_1+τ_2 Lu + sQ^-1a = x u_2^2 ≤τ_1/1+Q^-1/2a_2^2 s^2≤τ_2 z/1+Q^-1/2a_2^2 x,u∈ℝ^n, τ_1,τ_2∈ℝ_+, z∈ [0,1], s, t∈ℝ where L∈ℝ^n× n is a lower triangular matrix satisfying L L^⊤=( 1+Q^-1/2a_2^2) Q^-1 - Q^-1aa^⊤Q^-1. By Proposition <ref>, Y_0,Q can be convexified using an auxiliary matrix W satisfying W= Q^-1 - (1-z)/1+ Q^-1/2a_2^2Q^-1aa^⊤Q^-1 [ 𝐖 𝐱; 𝐱^⊤ t ]≽0 We may rewrite (<ref>) as W= 1/1+Q^-1/2a_2^2[ ( 1+Q^-1/2a_2^2) Q^-1 - Q^-1aa^⊤Q^-1] + z/1+Q^-1/2a_2^2Q^-1aa^⊤Q^-1 and note that for all x∈ℝ^n∖{0} x^⊤[( 1+Q^-1/2a_2^2) Q^-1 - Q^-1aa^⊤Q^-1]x = Q^1/2x_2^2 ( 1+Q^-1/2a_2^2) -(a^⊤Q^-1x)^2 ≥ Q^1/2x_2^2 >0 where the first inequality follows from Cauchy-Schwarz inequality. Thus, the matrix ( 1+Q^-1/2a_2^2) Q^-1 - Q^-1aa^⊤Q^-1 is positive definite. In particular, according to (<ref>), W is a conic combination of two given positive semidefinite matrices (where the coefficients of the conic combination may involve the binary variable z). 
It follows from <cit.>, that the system (<ref>) is SOCP representable: letting L∈ℝ^n× n such that L L^⊤=( 1+Q^-1/2a_2^2) Q^-1 - Q^-1aa^⊤Q^-1 (obtained for example from a Cholesky decomposition), the system (<ref>) can be represented with additional variables τ_1,τ_2∈ℝ_+, s∈ℝ and u∈ℝ^n as t≥τ_1+τ_2 Lu + sQ^-1a = x u_2^2 ≤τ_1/1+Q^-1/2a_2^2 s^2≤τ_2 z/1+Q^-1/2a_2^2 Intuitively, since terms (1-z)(a^⊤ x)^2 do not admit a good convex reformulation (Proposition <ref>), the key is to instead use the non-convex reformulation r(x)=(1-z)(a^⊤x)^2/1+zQ^-1/2a_2^2. To illustrate, consider the case with n=1 and a=1, that is, Y_0,λ = {(x,z,t)∈ℝ×{0,1}×ℝ t≥λ x^2+ x^2(1-z) }, and (Y_0,λ) = {(x,z,t)∈ℝ×[0,1]×ℝ t≥λ x^2 + (1-z)x^2/1+z/λ}, where λ>0 is a parameter that controls the magnitude of the quadratic term. Figure <ref> (top) depicts the graphs of the convex envelopes t= λ x^2 + (1-z)x^2/1+z/λ for various values of λ. Moreover, Figure <ref> (bottom) depicts the graphs of the non-convex reformulation r= (1-z)x^2/1+z/λ for the associated values of λ. Note that r can also be interpreted as the quantity added to the relaxation induced by big-M relaxations such as (<ref>), which discard terms associated with x^2(1-z) altogether. We observe that larger improvements over big-M relaxations are achieved for larger values of parameter λ. §.§ Convexification for the general case We now consider the non-homogeneous case where c≠ 0. We could not establish a simple description of (Y_c,Q) in the original space of variables. Moreover, while relaxations of Y_c,Q can be derived from Proposition <ref> by writing Y_c,Q= {(x_0,x,z,t)∈ℝ^n+1×{0,1}×ℝ t≥x^⊤Qx+ (cx_0-a^⊤x)^2(1-z), x_0=1 }, we found in preliminary computations that the resulting convexifications (which do not account for constraint x_0=1) could be much weaker. Fortunately, as we show in this section, (Y_c,Q) admits an easy representation with the introduction of an additional variable. Observe that set Y_c,Q can be written as projection onto the (x,z,t) space of Ŷ_c,Q = {(x,z,t,w)∈ℝ^n×{0,1}×ℝ^2 t≥x^⊤Qx+ (c+w-a^⊤x)^2, w(1-z)=0 }. Indeed, if z=0, then w=0 and Y_c,Q and Ŷ_c,Q coincide. On the other hand, if (x,1,t)∈ Y_c,Q, then (x,1,t,a^⊤ x-c)∈Ŷ_c,Q. We now characterize cl conv(Ŷ_c,Q). Let L∈^n× n be any matrix such that LL^⊤=(Q+aa^⊤)^-1, obtained for example from a Cholesky decomposition. The closure of the convex hull of set Ŷ_c,Q is (Ŷ_c,Q)= {(x,z,t,w)∈ℝ^n×[0,1]×ℝ^2 t≥ c^2+2c(w-a^⊤ x) +L^-1(x-Q^-1a/1+Q^-1/2a_2^2w)_2^2+w^2/(1+Q^-1/2a_2^2)z}. In the proof, first we compute (Ŷ_c,Q) in an SDP-representable extended formulation, then we simplify to a lower-dimensional SOCP-representable set, and finally we project out all additional variables. SDP-representable formulation Observe that x^⊤Qx+ (c+w-a^⊤x)^2=c^2+2c(w-a^⊤x) + (x^⊤ w) Q_1[ x; w ] with Q_1= ( [ Q+a a^⊤ -a; -a^⊤ 1 ]). Define Q_0=( [ Q+a a^⊤ 0; 0^⊤ 0 ]). Then a description of (Ŷ_c,Q) in an extended formulation is <cit.> (Ŷ_c,Q)= {(x,z,t,w)∈ℝ^n+3 ∃W∈^(n+1)× (n+1),τ∈ s.t. t≥ c^2+2c(w-a^⊤x) + τ ([ τ x^⊤ w; x 2c2*W; w 2c; ])≽ 0 (z,W)∈(P) }, where P={(0,Q_0^†), (1,Q_1^†) } and Q_i^† denotes the pseudoinverse of Q_i. Clearly, (P)={(z,W)∈ [0,1]×^(n+1)× (n+1): W=(1-z)Q_0^†+zQ_1^†}. SOCP-representable formulation Note that expressions of Q_0^† and Q_1^† can be easily computed <cit.>: Q_0^†=([ (Q+a a^⊤)^-1 0; 0^⊤ 0 ])=( [ Q^-1- Q^-1a a^⊤Q^-1/1+Q^-1/2a_2^2 0; ; ; 0^⊤ 0 ]) Q_1^†=Q_1^-1=( [ Q^-1 Q^-1a; ; ; a^⊤Q^-1 1+Q^-1/2a_2^2 ]). 
Therefore, we find that constraint W=(1-z)Q_0^†+zQ_1^† simplifies to W = ( [ Q^-1- Q^-1a a^⊤Q^-1/1+Q^-1/2a_2^2 0; ; ; 0^⊤ 0 ])+ z( [ Q^-1a a^⊤Q^-1/1+Q^-1/2a_2^2 Q^-1a; ; ; a^⊤Q^-1 1+Q^-1/2a_2^2 ]) =U+ z vv^⊤ where U= ( [ Q^-1- Q^-1a a^⊤Q^-1/1+Q^-1/2a_2^2 0; ; ; 0^⊤ 0 ]) v= ( [ Q^-1a/√(1+Q^-1/2a_2^2); ; √(1+Q^-1/2a_2^2) ]). Moreover, the system ∃W∈^(n+1)× (n+1) s.t. ([ τ x^⊤ w; x 2c2*W; w 2c; ])≽ 0, W=U+zvv^⊤ can be reformulated as an SOCP <cit.>. Letting L∈ℝ^n× n such that L L^⊤= (Q+aa^⊤)^-1 (obtained for example from a Cholesky decomposition), then point (x,z,w) satisfies constraints (<ref>) if and only if there exists τ_1,τ_2∈ℝ_+, s∈ℝ and u∈ℝ^n such that the constraints τ= τ_1+τ_2 L u + Q^-1a/√(1+Q^-1/2a_2^2)s = x s√(1+Q^-1/2a_2^2) =w u_2^2≤τ_1 s^2 ≤τ_2 z are satisfied. Projection In system (<ref>), we can project out τ=τ_1+τ_2, s=w/√(1+Q^-1/2a_2^2) and u=L^-1(x-Q^-1a/1+Q^-1/2a_2^2w), which by replacing in (<ref>) results in the formulation (Ŷ_c,Q)= {(x,z,t,w)∈ℝ^n+3 ∃τ_1,τ_2∈_+ s.t. t≥ c^2+2c(w-a^⊤x) + τ_1+τ_2 L^-1(x-Q^-1a/1+Q^-1/2a_2^2w)_2^2≤τ_1 w^2/(1+Q^-1/2a_2^2)≤τ_2 z }. In order to satisfy the first inequality constraint, we can assume that τ_1 and τ_2 are set to their lower bounds, concluding the proof. Theorem <ref> reveals an interesting connection between (Ŷ_c,Q) and the perspective reformulation. Indeed, letting e_n+1 denote the (n+1)th standard basis vector of ^n+1, one can rewrite the quadratic expression in (<ref>) as (x^⊤ w) Q_1[ x; w ]=δ w^2+(x^⊤ w) (Q_1-δe_n+1e_n+1^⊤) [ x; w ], where δ≥ 0 and Q_1-δe_n+1e_n+1^⊤≽ 0, and then reformulate term δ w^2 as δ w^2/z. From Theorem <ref>, we see that this reformulation is indeed ideal if δ is maximal, and the theorem provides a closed-form expression for the resulting Q_1-δe_n+1e_n+1^⊤ (that depends on the factorization LL^⊤). § APPLICATION TO LTS In this section, we use the convexification results in <ref> to obtain conic reformulations of (<ref>), or equivalently, problem (<ref>). §.§ The Big-M formulation The starting point for the formulations presented is the big-M formulation Big-Mmin_x,z,w ∑_i=1^m(y_i+w_i-a_i^⊤ x)^2+λTx_2^2 s.t. 1^⊤ z≤ m-h -Mz≤w≤ Mz x∈ℝ^n, z∈{0,1}^m, w∈^m, where M is a suitably large number. Observe that while the formulation is different from the original big-M formulation (<ref>) proposed in <cit.>, they are equivalent in terms of strength. Indeed, variables u in (<ref>) corresponds to terms |y_i+w_i-a_i^⊤ x| in (<ref>), and the absolute values of variables w in (<ref>) can be interpreted as the slacks associated with constraints |y_i-a_i^⊤ x|-u_i≤ Mz_i in (<ref>). We point out that formulation (<ref>) was the basis for the solution approach in <cit.> for problems with both outliers and sparsity. Indeed, the authors proposed to directly add constraints of the form -M̅ζ≤x≤M̅ζ, 1^⊤ζ≤ k and ζ∈{0,1}^n to (<ref>) – note that in <cit.>, the regularization term λTx_2^2 appeared in as constraint instead of as a penalty. §.§ The simple conic reformulation Observing that the objective of (<ref>) can be written as ∑_i=1^m((y_i+w_i-a_i^⊤ x)^2+λ/mTx_2^2), we use Theorem <ref> to independently reformulate each term in the sum, resulting in the formulation conicmin_x,z,w y_2^2-2y^⊤( A x-w)+∑_i=1^mL_i^-1(x-(m/λ)(T^⊤ T)^-1a/1+(m/λ)(T^⊤ T)^-1/2a_2^2· w_i)_2^2 +∑_i=1^m1/1+(m/λ)(T^⊤ T)^-1/2a_2^2·w_i^2/z_i s.t. 1^⊤ z≤ m-h x∈ℝ^n, z∈{0,1}^m, w∈^m, where matrices L_i satisfy L_iL_i^⊤=((m/λ)T^⊤ T+a_ia_i^⊤)^-1. 
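To illustrate how the simple conic formulation above can be assembled in practice, the sketch below uses cvxpy with T = I. Rather than forming L_i explicitly, it exploits the identity ‖L_i^{-1}v‖_2^2 = v^⊤((m/λ)T^⊤T + a_ia_i^⊤)v, and the terms w_i^2/z_i are modeled with quad_over_lin; the function name, defaults and solver assumptions are our own and not part of the formulation itself.

```python
import cvxpy as cp
import numpy as np

def lts_simple_conic_mio(A, y, h, lam=0.01):
    """Sketch of the big-M free conic formulation with T = I; requires a
    mixed-integer SOCP solver interfaced through cvxpy."""
    m, n = A.shape
    x = cp.Variable(n)
    w = cp.Variable(m)
    z = cp.Variable(m, boolean=True)
    obj = float(y @ y) - 2 * ((A @ x - w) @ y)
    for i in range(m):
        a = A[i]
        kappa = 1.0 + (m / lam) * float(a @ a)      # 1 + (m/lam)||a_i||^2
        v = x - w[i] * (((m / lam) / kappa) * a)    # affine in (x, w_i)
        # ||L_i^{-1} v||^2 written as a quadratic form in the original data
        obj = obj + cp.quad_form(v, (m / lam) * np.eye(n) + np.outer(a, a))
        obj = obj + cp.quad_over_lin(w[i], z[i]) / kappa
    prob = cp.Problem(cp.Minimize(obj), [cp.sum(z) <= m - h])
    prob.solve()
    return x.value, z.value
```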
Formulation (<ref>) does not use the big-M constraints -Mz≤w≤ Mz, as the conic terms w_i/z_i enforce the same logical relationship. Observe that since terms x_i^2/z_i can be reformulated as SOCP-constraints <cit.>, and every other term is either convex quadratic or linear, formulation (<ref>) can be easily used with mixed-integer SOCP solvers. §.§ The stronger conic reformulation The observation motivating the stronger conic reformulation is that, given any collection of matrices {Q_i}_i=1^m such that Q_i≻ 0 and ∑_i=1^mQ_i=λT^⊤ T, we can rewrite the objective of (<ref>) as ∑_i=1^m((y_i+w_i-a_i^⊤ x)^2+x^⊤ Q_ix) and then apply Theorem <ref>. The simple conic reformulation is a special case of such a convexification, with Q_i=(λ/m)T^⊤ T for all i∈ [m], but other choices of collection {Q_i}_i=1^m may result in stronger formulations. We now discuss how to find a collection {Q_i}_i=1^m resulting in better relaxations. We use the intuition provided in Remark <ref> and similar ideas to <cit.> to derive the formulation. Observe that given any collection {Q_i}_i=1^m, the relaxation is of the form min_x,z,w y_2^2-2y^⊤(Ax-w)+[ x^⊤ w^⊤ ]Σ[ x; w ] + ∑_i=1^m d_iw_i^2/z_i ∑_i=1^mz_i≤ m-h x∈ℝ^n, z∈ [0,1]^m, w∈ℝ^m, where Σ≽ 0 and d≥0. Moreover, we find that Σ=[ A^⊤ A+λT^⊤ T -A^⊤; -A I-Diag(d) ]. Thus, the continuous relaxation of the stronger conic reformulation is given by conic+max_d,Σ min_(x,z,w)∈^n+2m z_1≤ m-h 0≤z≤1y_2^2-2y^⊤(Ax-w)+[ x^⊤ w^⊤ ]Σ[ x; w ] + ∑_i=1^m d_iw_i^2/z_i Σ=[ A^⊤ A+λT^⊤ T -A^⊤; -A I-Diag(d) ]≽ 0 d∈_+^m, Σ∈ℝ^(n+m)×(n+m). Observe that in formulation (<ref>), constraints z∈{0,1} were relaxed to bound constraints, hence it is a relaxation of (<ref>). The MIO version corresponds to fixing d and Σ to the optimal values of (<ref>), and adding back constraints z∈{0,1}^m. We now discuss how to compute an optimal solution d^* of (<ref>) – note that Σ^* is immediately implied from the value of d^*. Given any fixed (x̅,z̅,w̅)∈^n× [0,1]^m×^m, the choice of d that results in the best relaxation for that particular point (i.e., resulting in the largest objective value for that particular point with z fractional) is an optimal solution of the semidefinite optimization problem (where we remove terms that do not depend on d) max_d∈_+^m ∑_i=1^m w̅_i^2(1/z̅_i-1)d_i s.t. [ A^⊤ A+λT^⊤ T -A^⊤; -A I-Diag(d) ]≽ 0. While convex and polynomial-time solvable, problem (<ref>) can be difficult to solve, mainly due to the presence of the large-dimensional conic constraint (<ref>), on order (n+m) matrices. Fortunately, as Proposition <ref> below shows, problem (<ref>) can be reformulated using a lower dimensional conic constraint, on order n matrices. If A does not contain a row of 0s and u^* is optimal for the optimization problem min_u∈^m ∑_i=1^mw̅_i^2(1/z̅_i-1)1/u_i s.t. A^⊤ A+λT^⊤ T-A^⊤Diag(u)A≽ 0, u≥1, then d^*∈^m_+ such that d_i^*=1-1/u_i^* is optimal for (<ref>). From the generalized Schur complement <cit.>, we find that constraint (<ref>) is equivalent to I-Diag(d)≽ 0, (I-Diag(d))(I-Diag(d))^†A=A, and A^⊤ A+λT^⊤ T-A^⊤(I-Diag(d))^†A≽ 0. Constraint (<ref>) is equivalent to d≤1. Constraint (<ref>) is automatically satisfied if d<1, since in that case matrix (I-Diag(d))^† =(I-Diag(d))^-1. In general, however, Ω=(I-Diag(d))(I-Diag(d))^† is the diagonal matrix such that Ω_ii=1_{d_i<1}. Therefore, if d_i=1, then the i-th row of matrix Ω A is a row of 0s, and constraint (<ref>) cannot be satisfied in that case unless the i-th row of A is also 0. 
Finally, perform a change of variables u_i=1/1-d_i, well defined since d_i<1 holds. From constraints d≥ 0 we find u≥1. Problem (<ref>) reduces to max_u∈^m ∑_i=1^m( w̅_i^2(1/z̅_i-1)-w̅_i^2(1/z̅_i-1)1/u_i) s.t. A^⊤ A+λT^⊤ T-A^⊤Diag(u)A≽ 0, u≥1. The result then follows by removing terms in the objective not involving u. The assumption on A is almost without loss of generality, since it is invariably satisfied in practice. In the formulation in Proposition <ref>, the nonlinear objective terms can reformulated with the introduction of additional variables s∈_+^m and rotated cone constraints 1≤ s_iu_i. The formulation contains a similar number of variables as (<ref>), but if n≪ m the nonlinear conic constraints are much simpler, and as a consequence the resulting formulation is substantially faster (and less memory intensive as well). We propose a simple primal-dual method to solve (<ref>), summarized in Algorithm <ref>. The algorithm iterates between solving the inner minimization of (<ref>) to optimality (for fixed d and Σ), and moving towards the optimal of the outer maximization (for fixed x, z and w). Each minimization step requires solving an SOCP-representable problem, while each maximization step requires solving an SDP as outlined in Proposition <ref>. The final MIO can be solved with off-the-shelf mixed-integer SOCP solvers. §.§ Improving relaxations with reliable data In some situations, a decision-maker may have access to data that has been carefully vetted, and is known to be reliable. Obviously, in such situations, such data should not be discarded. Moreover, as we now discuss, it is possible to leverage such data to further improve the relaxations. Suppose that the first m_0 datapoints are known to not contain outliers. Then, (<ref>) simplifies to min_x,z,w ∑_i=1^m_0(y_i-a_i^⊤ x)^2+∑_i=m_0+1^m(y_i+w_i-a_i^⊤ x)^2+λTx_2^2 s.t. 1^⊤ z≤ m-h -Mz≤w≤ Mz x∈ℝ^n, z∈{0,1}^m-m_0, w∈^m-m_0. Expanding the error terms of first m_0 points, we may rewrite the objective as ∑_i=1^m_0(y_i^2-2y_ia_i^⊤ x)+∑_i=m_0+1^m(y_i+w_i-a_i^⊤ x)^2+x^⊤(λT^⊤ T+∑_i=1^m_0a_ia_i^⊤)x. In other words, matrix λT^⊤ T+∑_i=1^m_0a_ia_i^⊤ can be treated as the “regularization" matrix and used throughout the conic formulations instead of λT^⊤ T, resulting in stronger formulations. Note that even if no reliable data is available, the ideas here can still be used to improve algorithms. For example, in a branch-and-bound search, at any given node some subset of variables z may have been fixed to 0. Thus, we may use the ideas in this subsection to improve the relaxations for the subtree emanating from that node. Doing so, however, would require a large degree of control of the branch-and-bound algorithm, which is not possible for several off-the-shelf branch-and-bound solvers. §.§ Intercept The presence of the strictly convex term Tx_2^2 is critical for the design of strong convex relaxations (Proposition <ref>). However, the presence of an intercept variable might hamper the exploitation of the regularization term. Indeed, while the intercept is often subsumed into matrix A (as a column of 1s), the regularization term rarely involves the intercept variable. Indeed, writing (<ref>) while making the intercept variable x_0 explicit results in min_x_0,x,z,w ∑_i=1^m(y_i+w_i-x_0-a_i^⊤ x)^2+λTx_2^2 s.t. 1^⊤ z≤ m-h -Mz≤w≤ Mz (x_0,x)∈ℝ^n+1, z∈{0,1}^m, w∈^m, where the intercept x_0 does not appear in term Tx_2^2, and thus this term is not strictly convex. 
Observe that the quadratic objective function is rank-deficient, as the rank of the quadratic function is at most m+n, while there are n+m+1 variables (x_0,x,w). As a consequence the conic formulations may be ineffective, e.g., feasible solutions of optimization (<ref>) may require d_i=0 for at least some index i. We propose three workarounds to resolve the difficulties posed by the intercept. The first one, which is the ideal solution, is to use reliable data as discussed in <ref>: a single datapoint known to be reliable will allow for the application of the conic formulations. The second approach, which is common in practice, is to standardize y so that it has 0-mean, and fix x_0=0. In other words, fix the intercept to be the mean of the response variable. The third approach is to artificially create a strictly quadratic term involving the intercept as a regularization. In particular, given a baseline value c̅_0 for the intercept (e.g., obtained from a heuristic solution), add a regularization term λ (x_0-c̅_0)^2 to the objective, penalizing departure of x_0 from this baseline. Naturally, the addition of this regularization may prevent the resulting formulation from finding optimal solutions of (<ref>). Nonetheless, in our computations, we found that the solutions obtained from the conic formulations are still high quality even if the baseline value c̅_0 is poorly chosen. § COMPUTATIONS In this section, we discuss computations with synthetic and real data. First, in <ref>, we discuss the different methods compared. Then in <ref> we provide a high level summary of our computational results, in <ref> we discuss experiments with synthetic data, used to validate the statistical merits of the approach, and in <ref> we provide experiments with real datasets. §.§ Methods tested We compare the three MIO formulations presented in <ref> for regression problems with outliers. The big M method, as discussed in <ref>. The simple conic reformulation, as discussed in <ref>. The stronger conic reformulation, as discussed in Algorithm <ref> in <ref>. The formulation for the least quantile of squares () given in <cit.>. Recall that the minimizes the h-th order statistic, i.e., solves the optimization problem min_x∈ℝ^n r_(h)(x)^2 where |r_(1)(x)|≥|r_(2)(x)|≥…≥|r_(m)(x)| are the residuals sorted in nonincreasing magnitude order. The formulation, which is based on big-M constraints, is described in Appendix <ref>. In addition, we also compare the following commonly used methods. Simply solving problem (<ref>), as described in <ref>, without accounting for outliers. Solving the least absolute deviation problem (<ref>). Heuristic that alternates between optimizing regression coefficients x given a fixed set of m-h discarded observations (by fitting an regression), and optimizes which m-h observations to discard (encoded by z) given fixed regression coefficients (a process called a C-step in <cit.>). We set the initial regression coefficients to be those obtained from (<ref>). We point out that we also attempted to implement the MIO formulation of <cit.> for least quantile regression. However that formulation, which includes three different sets of big-M constraints, resulted in numerical issues in most of the instances (with the solver producing “optimal" solutions that are not feasible for the MIO, and are extremely poor estimators). 
In any case, as mentioned in <ref>, the authors in <cit.> comment that the solutions produced by the MIO formulation are much worse than heuristic solutions (unless warm-started with such solutions, in which case the quality of solutions produced by MIO matches the heuristics). In all cases we use the ridge regularization T=I. We use Gurobi solver 9.5 to solve all (mixed-integer) linear, quadratic or second order cone optimization problems, and solver Mosek 10.0 to solve SDPs. All computations are done on a laptop with a 12th Gen Intel Core i7-1280 CPU and 32 GB RAM. We set a time limit of 10 minutes for all methods (for , this time includes both solving SDPs and a MIO), and use the default configuration of the solvers in all cases. All instances are standardized, that is, the model matrix A is translated and scaled so that ∑_i=1^m A_ij=0 and ∑_i=1^m A_ij^2=1 for all j∈ [n]; similarly, ∑_i=1^m y_i=0 and ∑_i=1^m y_i^2=1. §.§.§ Implementation details and numerical considerations We now discuss how we select and tune parameters for the different methods, as well as discuss potential issues if the parameters are poorly chosen. Formulation (<ref>) depends on the parameter M. If a small value is chosen, then the formulation might remove optimal solutions. If the value chosen is too large, then numerical issues can be encountered: for example, the solver might set z_j=10^-5 for some j (which is interpreted as 0 due to numerical precision of solvers) but set w_j to a large value, while satisfying constraint |w_j|≤ Mz_j. In our experiments we set M=1,000, and we did not observe any numerical issue in our experiments. This parameter was not tuned. Formulation (<ref>) does not involve any parameter, however it requires to have λ>0 and may result in incorrect behavior if λ→ 0. Moreover, based on past experience by the authors, mixed-integer SOCP formulations may result in poor performance or numerical difficulties in large problems. Our experiments satisfy λ≥ 0.01 and we did not observe any numerical issues. To handle the intercept (recall the discussion in <ref>), we tested both fixing x_0=0, or using the intercept x̅_0 produced by as a proxy, and adding the regularization term λ (x̅_0-x_0)^2 to the objective (where λ is the same coefficient as the one appearing in term λx_2^2). We did not observe major differences between the two approaches, in terms of quality or solution times. In our experiments with real data we set x_0=0, and in our experiments with synthetic data we use the value of as a proxy (not that the synthetic experiment includes the instances where performs worse). As the most sophisticated formulation, there are several implementation details associated with method . First, note that if an optimal solution of problem (<ref>) satisfies u_i^*=1 for some index i∈ [m], then d_i^*=0 and formulation (<ref>) does not include term w_i^2/z_i (thus the MIO formulation is not exact, but rather a relaxation). Thus, in our computations, we set a constraint u_i≥ 1.001. We did not tune this lower bound, although we noted that simply setting u_i≥ 1 does indeed result in incorrect results. We now discuss the termination criterion of Algorithm <ref>. Note that at each iteration, at line <ref> of the algorithm, a lower bound on the optimal objective value of (<ref>) is computed. 
Moreover, given the solution to the relaxation (x̅, z̅,w̅), we can compute an upper bound on the optimal objective value using a rounding heuristic, by setting z_j=1 for indexes corresponding to the m-h largest values of z, and then solving (<ref>) with z fixed. Neither the sequence of lower and upper bounds produced by the algorithm is guaranteed to be monotonic, so we track the best lower (LB) and upper bound (UB) found throughout all previous iterations, and compute an optimality gap at any given iteration as gap=(UB-LB)/UB. Finally, we stop the algorithm after 20 iterations (not necessarily consecutive) in which the gap improvement from one iteration to the next is less than 10^-6. The parameters (20, 10^-6) were tuned minimally, based on one synthetic instance and one real instance with the alcohol dataset (and on these datasets we did not observe major differences for different choices of parameters). In terms of numerical difficulties, in addition to those already mentioned for the formulation, method requires solving several SDPs with low-dimensional cones. Certainly, SDPs are inherently more difficult than quadratic or conic quadratic problems, more sensitive to the input data and more prone to numerical instabilities. For example, we observed that if the raw data (A,y) is used without standardization, the SDP solver encounters numerical difficulties in several of the instances. In our experiments, with standardized data, we did encounter numerical issues in a single instance (out of over 400). The intercept is handled similarly to . The MIO formulation proposed in <cit.> uses three sets of big-M constraints and, similar to method , is susceptible to error if the big-M values are poorly chosen. We use the approach suggested in <cit.> to handle the big-M constraint, by modeling them as SOS1 constraints (which essentially lets the solver decide the big-M value to be used). We often encountered numerical issues (discussed in detail in <ref>), with the solver returning solutions which it claims as optimal, but are in fact infeasible for the MIO. §.§.§ A note on additional improvements We point out that all methods presented here can be further improved. For example, one might use the solution of any of the heuristic methods as warm start for the MIO, and after run heuristic starting from the solution produced by the MIO (assuming the time limit was reached, in which case the solution of the MIO might not be optimal), which will produce a solution that is at least as good as the solutions obtained from either solving the MIO or using heuristic independently. We do suggest practitioners to use such improvements in practice. However, we point out that our objective in the paper is not to propose an algorithm that is “best" for the LTS problem, but rather to evaluate the strength of the conic formulations presented (which, as pointed out in <ref>, might be used as building blocks for other optimization problems). By not including additional improvements, we ensure that the differences in computational performance between the MIO methods is entirely due to the formulations used. §.§ Summary of results We first provide a summary of the results in our computations. ∙ In computations with both synthetic and real datasets, formulations and are orders-of-magnitude faster than . 
∙ In computations with both synthetic and real datasets, heuristic is very fast and delivers optimal solutions in a good proportion of instances, but produces extremely poor solutions in the remaining instances (often worse than solutions obtained by not handling outliers at all). The exact formulation consistently produces high-quality solutions (even in instances that are not solved to optimality). ∙ In our computations, formulation solves all synthetic instances in a few seconds –including instances with (n,m)∈{20,500}– but fails to solve within the time limit real instances with (n,m)∈{10,60}. Therefore, we advocate for departing from the common practice in the statistical and machine learning literature to use synthetic instances to evaluate the scalability of exact methods for (<ref>) or related problems. §.§ Experiments with synthetic data We now discuss experiments with synthetic data. First we discuss the instance generation process in <ref> and the relevant metrics in <ref>, then we present computational performance in <ref> and statistical results in <ref>. We point out that the focus in this section is the statistical performance of the different methods. §.§.§ Instance generation Our instance generation process follows closely the generation process in <cit.>, which in turn was inspired by <cit.>. Given parameters n and m, each entry of the matrix A is generated iid as A_ij∼𝒩(0,100). Moreover, we generate a “ground-truth" vector x^*=1, and responses as y=Ax+ϵ, where each entry of ϵ is generated iid as ϵ_i∼𝒩(0,10). Given a proportion τ of outliers, we randomly choose ⌊τ m⌋ of the observations as outliers and modify the associated responses as y_i← y_i+1,000. Finally, data is standardized. In all our experiments, we set the budget of outliers m-h to be equal to the proportion of outliers ⌊τ m⌋ (as done in <cit.>). §.§.§ Metrics In this section we compare the statistical benefits of the different methods. To assess the quality of a given method, we compare two metrics. Given an estimate x̂ for a given method, the relative risk defined as =x^*-x̂_2^2/x^*_2^2, where x^*=1 is the ground truth, measures the error of the estimated coefficients. A perfect prediction results in =0, the naive predition x̂=0 results in =1, and method with low breakdown point (e.g., ) may result in arbitrarily large values of . Given a set of candidate outliers encoded by an indicator vector ẑ, the recall defined as =|{i∈ [m]: ẑ_i=1 and i is an outlier}|/⌊τ m⌋ measures the proportion of outliers that are correctly identified. Note that is not defined for and , since those methods do not explicitly identify outliers. §.§.§ Computational performance We test small instances for all combinations of parameters n∈{2,20}, m∈{100,500}, λ∈{0.01,0.1,0.2,0.3} and τ∈{0.1,0.2,0.4}. For each combination of parameters, we generate five instances. Heuristics such as and are solved in a fraction of a second. The results for MIO formulations are summarized in Figure <ref>. We observe that method can solve 50% of the instances in under 10 minutes, a performance comparable with the one reported in <cit.> (although the data generation process is different). Methods and are much faster. In particular, can solve all instances in less than 11 seconds, resulting in at least a two-orders-of-magnitude speedup over . As we observe in our computations with real datasets (see <ref>), the results here are not representative of the actual performance of the methods in practice. 
Therefore, we do not provide detailed computational results in this section. We simply comment on the effect of parameters τ and λ: instances with small number of outliers ⌊τ m⌋ are much easier to solve (most of instances solved to optimality by correspond to small values of τ), and formulation benefits from larger values of regularization λ as well. Finally, the continuous relaxation of is strong regardless of the combination of parameters, and most of the instances are solved at the root node. §.§.§ Statistical results We now present the statistical performance for different methods for parameters (n,m)∈{(2,100),(20,100),(20,500)}. We omit results for methods and , since delivers similar solutions much faster. In Table <ref>, we compare the performance of , and methods , and with parameter λ=0.01. The results for m=100 are also summarized in Figure <ref>. Table <ref> shows the effect of varying the parameter λ for the relative risk of estimators , and . Relative risk for , and . Each row represents the average over five instances generated with identical parameters. 2*n 2*m 2*τ 3c|λ=0.01 3c|λ=0.1 3c|λ=0.2 3cλ=0.3 2 100 0.1 0.001 0.001 6.387 0.009 0.009 5.316 0.030 0.030 4.420 0.058 0.058 3.739 2 100 0.2 0.001 0.001 7.443 0.011 0.011 6.332 0.036 0.036 5.382 0.067 0.067 4.645 2 100 0.4 0.001 0.001 23.582 0.015 0.015 19.901 0.051 0.051 16.800 0.096 0.096 14.393 20 100 0.1 0.001 0.001 12.262 0.017 0.017 9.165 0.050 0.050 7.904 0.087 0.096 5.747 20 100 0.2 0.002 0.001 19.388 0.023 0.023 14.848 0.063 0.063 11.609 0.107 0.107 9.410 20 100 0.4 0.004 109.979 26.580 0.070 23.527 20.398 0.115 0.115 15.965 0.178 0.178 12.944 20 500 0.1 0.000 0.000 1.631 0.011 0.011 1.360 0.035 0.035 1.147 0.067 0.067 0.997 20 500 0.2 0.000 0.000 3.234 0.013 0.013 2.676 0.043 0.043 2.228 0.079 0.079 1.900 20 500 0.4 0.001 0.001 4.695 0.022 0.022 3.927 0.067 0.067 3.288 0.117 0.117 2.807 We note that estimator results in the best risk and recall in all instances considered, and is the only estimator which does not break down in instances with n=20, m=100 and τ=0.4 (i.e., instances with the smallest signal-to-noise ratio and larger number of outliers among those considered). We observe that formulation in general produces poor solutions, as expected. Indeed, with a breakdown point of 0, the presence of a single outlier could result in arbitrarily poor solutions, and the instances used contain several outliers. Interestingly, while formulation also has a breakdown point of 0, it produces good solutions in most of the instances considered (although even in those instances, the relative risk can be five times more than the risk of other robust approaches). However, in instances with n=20, m=100 and τ=0.4, the estimator breaks down and results in extremely poor solutions, worse in fact than those produced by estimator which ignores outliers. Heuristic matches the performance of in instances where (n,m,τ)≠(20,100,0.4) –in fact, the heuristic finds optimal solutions in all these instances–, but fails dramatically in the setting (n,m,τ)=(20,100,0.4): the solutions produced are poor local minima of (<ref>), and the statistical properties are in fact worse than and . We see from Table <ref> that as the regularization parameter λ increases, the performance of improves (showcasing how the ℓ_2 regularization induces robustness) but is still substantially worse than . 
On the other hand, we see that an increase of the regularization parameter results in worse performance for , but the risk remains low for all combinations of regularization tested. Finally we observe that in instances with (n,m,τ)=(20,100,0.4), an increase of regularization results in much better performance for heuristic , and the estimator does not break down if λ≥ 0.2. However, the resulting risk is larger than the risk of solutions produced by with smaller values of regularization parameter. §.§ Experiments with real data We now discuss experiments with real data. First we present the instances used in <ref> and the metrics tested in <ref>, and then discuss computational times in <ref> and solution quality in <ref>. §.§.§ Instances To test the methods we use instances included in the software package “robust base" <cit.>. Specifically, we select the instances that: (i) are regression instances (as opposed to classification), (ii) do not have missing data, and (iii) are not time series data. The resulting 17 datasets are summarized in Table <ref>. For each instance, we vary the parameter λ∈{0.05,0.1,0.2} and the proportion of allowed outliers m-h∈{⌊ 0.1m⌋,⌊ 0.2m⌋,⌊ 0.3m⌋,⌊ 0.4m⌋}, thus creating 12 different instances for each particular dataset. Note that we separate the instances in nine “easy" datasets (all satisfying m≤ 40) and eight “hard" datasets (with m>40). This distinction is based on the performance of the formulation: the average time to solve all instances for an easy dataset is less than 15 seconds, whereas for the hard datasets there is at least one instance that could not be solved to optimality within the time limit of 10 minutes. §.§.§ Metrics On real data, there is no “ground truth" concerning which points are outliers or the actual values of the regression coefficients. Thus, we limit our comparisons to the performance of MIO algorithms (as measured by time, nodes and optimality gap) and the quality of the solutions obtained in terms of the objective value of (<ref>) of and . §.§.§ Computational performance of MIO Similarly to results with synthetic instances, heuristics such as run in a fraction of a second in all cases. In computations with easy datasets, solves the instances in three seconds on average, and under 45 seconds in all cases. Formulation also requires three seconds on average as well (and 87 seconds in the worst-case), while formulation requires only one second (and under 10 seconds in the worst case). Note that easy datasets have m≤ 41, and full enumeration may be possible in most of the instances. In the interest of shortness, we do not present detailed results on the computations with easy datasets, and focus in this section in the more interesting computations of methods with hard datasets. Figure <ref> presents aggregated results for instances with hard datasets. In particular, it shows the percentages of instances solved by methods , and within any given time limit. Observe that the performance of all methods in instances with real datasets is worse than the one reported in synthetic instances, despite real datasets being in some cases smaller by an order-of-magnitude. This discrepancy of performance serves as compelling evidence that synthetic instances should not be used to evaluate the “scalability" of MIO methods for LTS or related problems in regression. Indeed, as is well-known in the MIO literature, size of an instance is often a poor proxy of its difficulty. 
We see that the formulation struggles, solving only 22% of the instances within the time limit of 600 seconds. Formulation is better across the board, requiring only 16 seconds to solve 22% of the instances, and managing to solve 35% of the instances overall. Formulation is worse than in the simpler instances, but much better in the more difficult ones, managing to solve over 45% of the instances. Indeed, in instances that can be solved easily by other methods, the additional cost of solving SDPs hurts the performance, but the stronger relaxation pays off in difficult instances. Table <ref> presents detailed results for each dataset as a function of the budget parameter m-h, and Table <ref> presents detailed results as a function of the regularization parameter λ. It shows for each dataset and budget/regularization parameter the average time and branch-and-bound nodes used by the solver (time-outs are counted as 600 seconds), as well as the end gaps as reported by the solver (instances solved to optimality count as 0%). We see that formulation can solve instances if the parameter m-h is small (and the number of feasible solutions m m-h is small as well) but struggles in other instances. Formulations and also perform better if the parameter m-h is small (since enumeration is more effective) or if the regularization parameter is large (since the relaxations are stronger). Formulation is competitive or better than in the smaller datasets such as alcohol, but is superior overall. §.§.§ Solution quality We now compare the best solutions found by formulation and heuristic in the real datasets. For each instance, we compute the gap of any method as Gap=ζ_method-ζ^*/ζ^*, where ζ_method is the objective value found by the method and ζ^* is the objective value of the best solution found for that instance (by any method). The results are presented in Figure <ref>. We see that produced worse solutions than in close to 40% of the instances, and in those instances the gaps are relatively large (9% on average, and has high as 50% in some instances). In contrast, delivers worse solutions in only 5.4% of the instances, and the gaps are relatively small in those instances (2% on average). We conclude that while finds optimal solutions (or at least as good as ) in a good portion of the instances, it may deliver poor quality solutions when it fails. In contrast, seems to be reliable in all cases (at the expense of additional computational time). § CONCLUSIONS We studied relaxations for a class of mixed-integer optimization problems arising often in statistics. The problems under study are characterized by products of binary variables with nonlinear quadratic terms. Few MIO approaches exist in the literature for the problems considered, and rely on big-M linearizations of the cubic terms, resulting in weak relaxations which provide trivial bounds only. In the paper, we derive the first big-M free relaxations of the problems considered, and our numerical studies with least trimmed squares instances confirm that the suggested relaxations are substantially better than the state-of-the-art. We hope that the study in the paper serves to pave the way for efficient solution of the problems considered via mixed-integer optimization. abbrv
http://arxiv.org/abs/2307.04396v1
20230710075746
Diffusion and fluctuations of open charmed hadrons in an interacting hadronic medium
[ "Kangkan Goswami", "Kshitish Kumar Pradhan", "Dushmanta Sahu", "Raghunath Sahoo" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-ex", "nucl-th" ]
http://arxiv.org/abs/2307.04360v1
20230710061554
Mean-field analysis of load balancing principles in large scale systems
[ "Illés Horváth", "Márton Mészáros" ]
math.PR
[ "math.PR", "60" ]
Probe hyperon electric dipole moments with full angular analysis Jianyu Zhang^1 August 12, 2023 ================================================================ Load balancing plays a crucial role in many large scale systems. Several different load balancing principles have been proposed in the literature, such as Join-Shortest-Queue (JSQ) and its variations, or Join-Below-Threshold. We provide a high level mathematical framework to examine heterogeneous server clusters in the mean-field limit as the system load and the number of servers scale proportionally. We aim to identify both the transient mean-field limit and the stationary mean-field limit for various choices of load balancing principles, compute relevant performance measures such as the distribution and mean of the system time of jobs, and conduct a comparison from a performance point of view. § INTRODUCTION For large scale service systems, where service resources (e.g. computing capacity) are distributed to several service units, load balancing plays a crucial role in distributing the total load of the system to ensure better overall service for the incoming tasks (jobs). There are many different types of load balancing principles. Static load balancing does not take into account the state of the system, instead aiming for a balanced distribution based purely on the incoming jobs. Static load balancing is in general easy to set up, requires minimal overhead communication and performs well when the incoming jobs have some regular patterns. However, in most systems the incoming jobs have some level of random variability. This situation is generally better handled by load balancing policies which take into account the current state of the system. Scheduling decisions may be based on different types of information, depending on what is available. In general, one of the most important parameters is the current load of the servers, as it is generally desirable to maintain a balanced load among all servers. If available, further information taken into account may include any of the following: * the servers may be heterogeneous, with faster and slower servers; * job and server types may be important in case the servers are heterogeneous and certain servers can serve certain types of jobs more efficiently; * job sizes may be used to compute current server load more precisely; * in some cases, physical location may play a role; * there may be bottlenecks other than computing capacity in the system (e.g. bandwidth). In many real-life systems, such information may not be available, but even if it is, there is a tradeoff: a complicated load balancing policy that requires too much communication and computation may generate a significant overhead cost, slowing down the entire system. Hence it is in general desirable to stick to simple load balancing policies. In the present paper, we provide a mathematical framework that does not include communication overhead costs. Such aspects can be addressed in the modeling in several ways; however, these are highly scenario-dependent, and as such, we decided to keep the model high-level. We will discuss load balancing policies based exclusively on the queue length of servers. Job types, physical location and other bottlenecks will not play a role. We allow a heterogeneous server cluster, where there are several different types of servers, and the model can also incorporate processor sharing, where a server can serve multiple jobs simultaneously. 
The server cluster model of the present paper will be described by a density-dependent Markov population model. As the system size goes to infinity, the mean-field limit of density-dependent Markov population models has been examined in the literature for both the transient regime (up to a finite time horizon) and the stationary regime. The transient limit object is deterministic and can be described as the solution of a system of ordinary differential equations (ODEs) in case the Markov transition rates are Lipschitz-continuous <cit.>, or as the solution of a differential inclusion in case the transition rates are discontinuous <cit.>. Overall, these results are relatively straightforward to apply to the model in the present paper. For the stationary regime, for Lipschitz-continuous transition rates, it is known that in the mean-field limit, the stationary distribution of the finite system concentrates on the unique asymptotically stable solution (attractor) of the limit system of ODEs <cit.>. Similar results are available for the discontinuous setting, but only in case the attractor lies inside a domain where the transition rates are continuous <cit.>. We are not aware of any general results in case the attractor is at a discontinuity point of the transition rates, which happens to be the case for several of the load balancing policies discussed in the present paper. The contributions of the paper are the following: * Providing a high-level mathematical framework for modelling load balancing systems that accommodates several different load balancing principles. * Identification of the mean-field limit in both the transient and stationary regime. * Computation of the mean system time and also the system time distribution in the stationary mean-field limit. Computation techniques need to be adapted for discontinuities; these modified formulas are, to the best of our knowledge, novel. * Numerical comparison of the various load balancing principles via simulation and theoretical computations for the mean-field limit. All of the above is carried out for a fairly general setting, where the server cluster can be heterogeneous, and we will also allow a varying service rate, depending on the number of jobs in a given server. We will focus mostly on the first-in-first-out (FIFO) service principle, but note that all calculations are straightforward to derive for limited processor sharing (LPS), where a server can serve multiple jobs simultaneously. Rigorous proofs are not the main focus of the paper. We do refer to relevant rigorous results from the literature in cases where they are available, but only provide heuristic arguments for the novel cases. That said, numerical analysis does support the heuristic computations of the paper. The codes used for the simulations and analytic calculations throughout the paper are available at <cit.>. The rest of the paper is structured as follows: the rest of this section is dedicated to an overview of load balancing in the literature (Section <ref>), and to the necessary mathematical background in queueing theory (Section <ref>) and population processes (Section <ref>). Section <ref> describes the general setup of the server cluster we are interested in. Section <ref> describes the various load balancing principles. Section <ref> contains numerical experiments and comparison of the various load balancing principles, and Section <ref> concludes the work.
The Appendix addresses a few related questions not strictly part of the main body of work, and also some further details. §.§ Load balancing principles One of the classic dynamic load balancing policies is Join-Shortest-Queue (JSQ), where the incoming job is assigned to the server with the shortest queue (lowest number of jobs) <cit.>. The upside of this method is that it offers very even balancing for homogeneous server clusters. However, it requires up-to-date knowledge of all server states, which may require a significant communication overhead. Due to this, several variants of JSQ have been in use: for JSQ(d), the incoming job is scheduled to the shortest queue from among d servers, selected at random. This offers less balanced load distribution, but also requires less communication. d=1 corresponds to random assignment with no load balancing, and d equal to the total number of servers corresponds to JSQ; as d is increased, it offers better balancing but also more overhead communication. Interestingly, already for d=2, the resulting load balancing policy has certain asymptotic optimality properties <cit.>, often referred to as the power-of-2 (or power-of-d) policies. As a consequence, d is often selected relatively low, such as d=2 or d=5. For Join-Idle-Queue (JIQ), the incoming job is scheduled to an idle server at random; if there are no idle servers, the assignment is random among all servers. Once again, this offers less balanced load distribution and less communication overhead than JSQ, but, similar to JSQ(d), has some nice asymptotic optimality properties. Mean-field analysis has been carried out for JIQ in <cit.>. Another related load balancing policy is Join-Below-Threshold (JBT), which associates a threshold with each server; servers below their threshold are considered available and servers at or above their threshold are full. Jobs will be dispatched to a server randomly from among all available servers. This policy again offers less balancing than JSQ, but still offers protection against overloaded servers, and requires communication only when a server switches between available and full. For a full mean-field analysis and cluster optimization of JBT, we refer to <cit.>. §.§ Birth-death processes and queues The jobs arriving to and leaving a server's queue can be modelled with a birth-death process (Markov-queue). For technical simplicity, we resort to finite queues, with the maximal queue length denoted by B and state space of a single queue Ω = {0,1,2,…,B}. We assume Markov arrivals, that is, jobs arrive according to a Poisson process, and Markov service, that is, the time it takes to serve a job (once service has started) is exponentially distributed. There are multiple service principles. For First-In-First-Out (FIFO) service principle, the server always serves the first job of a queue, while the other jobs wait. Whenever the first job has finished service, the server immediately starts serving the next job in the queue. For Limited Processor Sharing (LPS), the server can work on multiple jobs simultaneously. The maximum number of jobs served simultaneously is called the multi-programming level (MPL); further jobs in the queue wait and enter service in a manner similar to FIFO. We allow the service rate to depend on the number of jobs in the queue (this is particularly relevant for LPS, where multiple jobs can be served jointly for more efficient service overall). 
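As a small illustration of a single finite Markov queue with a queue-length-dependent service rate curve, the following minimal Python sketch simulates one server and returns the time-averaged queue-length distribution. The numerical values are illustrative only (the actual parameters of the experiments appear later in Table <ref> and are not reproduced here), and the function names are ours rather than code from the repository referenced later.

import random

def simulate_single_queue(lam, mu, B, T):
    """One finite Markov queue with queue-length-dependent service rates
    mu[0..B] (mu[0] must be 0); returns the time-averaged distribution of
    the queue length over [0, T].
    (Whether the server works FIFO or LPS does not change these queue-length dynamics.)"""
    t, q = 0.0, 0
    time_in_state = [0.0] * (B + 1)
    while t < T:
        rate_arr = lam if q < B else 0.0   # arrivals are blocked when the buffer is full
        rate_dep = mu[q]                   # no service in an empty queue since mu[0] = 0
        total = rate_arr + rate_dep
        dt = random.expovariate(total)
        time_in_state[q] += min(dt, T - t)
        t += dt
        if t >= T:
            break
        q += 1 if random.random() < rate_arr / total else -1
    return [x / T for x in time_in_state]

if __name__ == "__main__":
    lam, B = 1.25, 10
    mu = [0.0, 1.0, 1.1, 1.2, 1.3] + [1.3] * 6   # an increasing service rate curve (illustrative)
    print(simulate_single_queue(lam, mu, B, 2.0e5))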
The choice of service principle has no effect on the queue length changes (no matter which job is served, queue length decreases by 1), but it does affect the system time of individual jobs. We will mostly focus on FIFO. §.§ Density-dependent population processes In this section, we present mathematical background and framework for density-dependent Markov population processes. A density-dependent Markov population process has N interacting components, each of which is in a state from a finite set of local states S. The global state of the system is defined as the total number of individuals in each state, that is, a vector X^N∈{0,1,…,N}^|S| with X^N_1+…+X^N_|S|=N. The normalized global state of the system can be defined as x^N=X^N/N, so x^N∈ [0,1]^S with x^N_1+…+x^N_|S|=1. Each component acts as a continuous time Markov chain. The rate of the transition from i ∈ S to j ∈ S is r_ij^N (for i ≠ j). The rates are assumed to be density-dependent, that is r_ij^N = r_ij(x^N) for some function r_ij:[0,1]^|S|→[0,∞]. In the classic setup defined by Kurtz <cit.>, the functions r_ij are usually assumed to be Lipschitz-continuous and independent of N. With this setup, x^N(t) is a continuous time Markov-chain. We define the mean-field equation of the system as the following: /ṭv_i(t)=∑_j∈ S v_j(t)r_ji(v(t)), i∈ S, where r_ii:=-∑_j∈ S, j≠ ir_ij, and x_i^N(0)→ v_i(0) (for i=1,…, |S|), in probability as N→∞. Lipschitz-continuity guarantees existence and uniqueness of the solution of (<ref>). The following result of Kurtz states mean-field convergence in the transient regime <cit.>: Assuming r_ij (i,j∈ S), are Lipschitz-continuous and x_i^N(0)→ v_i(0) i∈{1,…,|S|} , in probability, then for any T>0 we have lim_N →∞P( sup_t ∈ [0,T]𝐱̅^N(t) - 𝐯(t) > ϵ) = 0. Kurtz also proved that the standard deviation of x^N is of order 1/√(N) <cit.>. An important concept related to Theorem <ref> is asymptotic independence, also known as propagation of chaos, stating that as N→∞, the evolution of two distinct queues is asymptotically independent. This is due to the fact that the evolution of a queue depends only on the global state, which is asymptotically deterministic. We also have stationary mean-field convergence. Given the following assumptions: * r_ij are Lipschitz-continuous, * the Markov process x^N(t) has a unique stationary distribution π^N for each N, and * (<ref>) has a unique stable attractor (ν_1,…,ν_|S|), we have that the probability measure π^N on S converges in probability to the Dirac measure concentrated on ν. Theorems <ref> and <ref> have been generalized in several directions during recent years. Benaïm and Le Boudec elaborated a framework applicable for a wider range of stochastic processes, which also allows the r_ij functions to have a mild dependency on N <cit.>. The condition on Lipschitz-continuity can also be weakened. For discontinuous r_ij's, (<ref>) turns into a differential inclusion. A formal setup for differential inclusions is quite technical, and is omitted from the present paper. For a fully detailed setup, we refer to <cit.>, specifically Theorems 4 and 5, and <cit.>, Theorem 3.5 and Corollary 3.9 for a corresponding version of Theorem <ref>. For a corresponding version of Theorem <ref> for discontinuous transition rates, we refer to <cit.>, where the main additional condition is that the unique attractor lies inside a domain where the r_ij are continuous. The applicability of Theorems <ref> and <ref> will be addressed more in Section <ref>. 
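As a toy illustration of Theorem <ref> (deliberately simpler than the systems studied in this paper), consider N components with local states {0,1}, where a component moves from 0 to 1 at a constant rate a and from 1 to 0 at rate b x_1, with x_1 the current fraction of components in state 1. The sketch below simulates the finite-N process and integrates the limiting ODE dv_1/dt = a(1-v_1) - b v_1^2; the rates and function names are chosen only for illustration.

import random

def gillespie_toy(N, a, b, T):
    """Finite-N density-dependent process: 0 -> 1 at rate a, 1 -> 0 at rate b * x1."""
    t, n1 = 0.0, 0
    ts, xs = [0.0], [0.0]
    while t < T:
        x1 = n1 / N
        up, down = a * (N - n1), b * x1 * n1     # total transition rates
        t += random.expovariate(up + down)
        n1 += 1 if random.random() < up / (up + down) else -1
        ts.append(t)
        xs.append(n1 / N)
    return ts, xs

def mean_field_toy(a, b, T, dt=1e-3):
    """Euler integration of the mean-field limit dv1/dt = a (1 - v1) - b v1^2."""
    v1, ts, vs = 0.0, [0.0], [0.0]
    for k in range(int(T / dt)):
        v1 += dt * (a * (1.0 - v1) - b * v1 * v1)
        ts.append((k + 1) * dt)
        vs.append(v1)
    return ts, vs

# For increasing N, the sample paths of gillespie_toy concentrate around mean_field_toy,
# with fluctuations of order 1/sqrt(N), in line with Theorem <ref>.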
From Theorem <ref> it also follows that lim_N→∞E(π^N)= ν, so ν can be used as an approximation for E(π^N) for large N. E(π^N) here is basically an |S|-dimensional vector of distributions, which converges to a constant |S|-dimensional vector in distribution. The limit point can be interpreted as a distribution on S, and is the stable attractor ν. § SERVER CLUSTERS The server cluster model examined in the present paper consists of N servers, each with a finite buffer, and a single common dispatcher. Jobs arrive to the dispatcher according to a Poisson process with rate Nλ (that is, the average arrival rate is λ per server). Each arriving job is instantly dispatched to one of the N servers; that is, the dispatcher maintains no queue. The cluster may have K different server types. We assume K is fixed, independent from N. The servers within each type are identical. Buffer sizes are denoted by B^(k) for each type k∈{1,…,K}. We assume service times are exponentially distributed; for each server type, the service rate can be constant or it may depend on the current queue length of the server. Service rates are denoted by μ_i^(k), where i∈{ 0,1,…,B^(k)} is the queue length, and k∈{ 1,2,…,K} denotes the type of the server. For a given k∈{1,…,K}, μ_0^(k),…,μ_B^(k)^(k) is also referred to as the service rate curve. (μ_0^(k)=0, but we still include it in the notation.) For each service rate curve, it is natural to assume that the total rate increases with the queue length, but the per-job rate decreases with the queue length: μ_1^(k)≤μ_2^(k)≤μ_3^(k)≤…, μ_1^(k)≥μ_2^(k)/2≥μ_3^(k)/3≥… k∈{ 1,2,…,K} Due to the finite buffer sizes, data loss may occur whenever a job is dispatched to a full queue. The probability of a job loss will be typically very low (due to load balancing), but it is still something that we will address in due course. The server cluster is a density-dependent population process, where the state of a server is simply the number of jobs in its queue. The global state will be denoted by X_i^(k),N(t), ( 0≤ i≤ B^(k), 1≤ k≤ K ), where X_i^(k),N(t) is the number of servers with i jobs in its queue at time t. We will mostly use its normalized version x^N(t)=x_i^(k),N(t), ( 0≤ i≤ B^(k), 1≤ k≤ K), where x_i^(k),N(t)=X_i^(k),N(t)/N. The number of servers of type k is denoted by N_k and the ratio of each server type is denoted by γ_k^N=N_k/N, k=1,…,K. γ_k^N may depend on N, but we will assume they converge to some fixed values γ_k as N→∞. We also want the system to be stable, so λ < ∑_k=1^K γ_k^N μ_B^(k). (Actually, due to the finite buffer size assumption, the system is technically always stable, but we will nevertheless assume (<ref>).) The evolution of x^N(t) can be formally defined using Poisson representation. Let P_i→ (i+1),k(t), 0≤ i≤ B^(k)-1, k=1,…,K P_i→ (i-1),k(t), 1≤ i≤ B^(k), k=1,…,K denote independent Poisson processes with rate 1. P_i→ (i+1),k(t) corresponds to arrivals to queues of type k with length i, and P_i→ (i-1)(t) corresponds to jobs leaving queues of type k with length i. The Poisson representation of x^N(t) is x_i^(k),N(t)= 1/N P_(i-1)→ i,k(N ∫_0^t λ f^(k)_i-1(x^N(s))ṣ) -1/N P_i→ (i+1),k(N ∫_0^t λ f^(k)_i(x^N(s))ṣ) +1/N P_(i+1)→ i,k(N ∫_0^t μ^(k)_i+1 x_i+1^(k),N(s)ṣ) -1/N P_i→ (i-1),k(N ∫_0^t μ^(k)_i x_i^(k),N(s)ṣ), where f_i^(k)(x^N(t)) is the probability of a new arriving job to enter a queue with length i of type k. The {f_i^(k)(x^N(t)):0≤ i≤ B_k, k=1,…,K} functions are going to be collectively called the dispatch functions. 
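The Poisson representation above translates directly into an event-driven simulation of the finite-N cluster. The following sketch is one possible implementation (it is not the code from <cit.>; the dispatcher interface and the JSQ(d) example dispatcher are our own illustrative choices).

import random

def simulate_cluster(lam, types, service_curves, dispatch, T):
    """types[i]             : type index of server i
       service_curves[k][q] : mu_q^{(k)}, with service_curves[k][0] == 0
       dispatch(q, types)   : index of the server that receives an arriving job
       Returns the final queue-length vector and the number of lost jobs."""
    N = len(types)
    q = [0] * N
    t, lost = 0.0, 0
    while t < T:
        mu = [service_curves[types[i]][q[i]] for i in range(N)]
        total = N * lam + sum(mu)
        t += random.expovariate(total)
        if t >= T:
            break
        if random.random() < N * lam / total:        # next event is an arrival
            i = dispatch(q, types)
            if q[i] >= len(service_curves[types[i]]) - 1:
                lost += 1                            # buffer full: the job is lost
            else:
                q[i] += 1
        else:                                        # next event is a departure
            i = random.choices(range(N), weights=mu)[0]
            q[i] -= 1
    return q, lost

def jsq_d_dispatcher(d):
    """JSQ(d): the shortest queue among d servers sampled uniformly at random."""
    def dispatch(q, types):
        sample = random.sample(range(len(q)), d)
        return min(sample, key=lambda i: q[i])
    return dispatch

Recording q at the times of interest and normalizing the queue-length counts by N gives the empirical version of x^N(t) that is compared with the mean-field limit in the figures of Section <ref>.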
The dispatch functions depend on the load-balancing principle, which will be addressed later. Formally, f_i^(k) are defined on the normalized state x^N(t), which are all contained in the domain {x:x∈ℝ^∑_k=1^K (B^(k)+1), x_j^(k)≥ 0, ∑_k=1^K ∑_j=0^B^(k)x_j^(k)=1}. The four possible changes in the number of queues of length i which appear in (<ref>) correspond to: * a job arriving to a queue of length i-1; * a job arriving to a queue of length i; * a job leaving a queue of length i+1; * a job leaving a queue of length i. On the border of the domain (<ref>), certain changes cannot occur. There is no service in empty queues: μ_0^(k)=0 (k=1,…,K), and no arrival to full queues: f^(k)_B^(k)(.)≡ 0 (k=1,…,K). We are interested in server clusters of various N sizes and especially the limit object as N→∞, that is, the mean-field limit (in accordance with Section <ref>). We first define the general mean-field equations corresponding to (<ref>): v^(k)_i(t)= v^(k)_i(0)+∫_0^t λ f^(k)_i-1(v(s))ṣ -∫_0^t λ f^(k)_i(v(s))ṣ +∫_0^t μ^(k)_i+1 v_i+1^(k)(s)ṣ -∫_0^t μ^(k)_i v_i^(k)(s)ṣ in integral form, or, equivalently, /ṭv_i^(k)(t)=λ f^(k)_i-1(v(t))-λ f_i^(k)(v(t)) +μ^(k)_i+1 v^(k)_i+1(t) - μ^(k)_i v^(k) _i(t) in differential form. An empty initial cluster corresponds to the initial condition v_i^(k)(0)= {[ γ_k for i=0,; 0 otherwise. ]. Theorem <ref> applies to this system whenever the f_i^(k) functions are Lipschitz-continuous. It turns out that the conditions of the general version of Theorem <ref> are mild enough so that transient mean-field convergence holds for all the discontinuous choices of f_i^(k) in the present paper, but this is not checked rigorously. For the stationary case, we denote the stationary distribution ν=(ν_i^(k)),i=0,…,B^(k), k=1,…,K (similar to the notation of Section <ref>). Theorem <ref> applies whenever f_i^(k) are Lipschitz-continuous. In the discontinuous setting, the most relevant question is whether the f_i^(k) functions are continuous at the unique fixed point ν or not. If ν lies inside a region where f^(k)_i are Lipschitz-continuous, then the conclusion of Theorem <ref> applies. However, when the f_i^(k) functions are discontinuous at ν, Theorem <ref> does not apply; in fact, little is known in this case rigorously. Based on this, it makes sense to distinguish the following two cases: * the functions f_i^(k) are Lipschitz-continuous at ν, or * the functions f_i^(k) are discontinuous at ν. When the functions f_i^(k) are Lipschitz-continuous at ν, the equations for the mean-field stationary distribution can be obtained from (<ref>) by setting /ṭv^(k)_i(t)=0: 0=λ f^(k)_i-1(v(t))-λ f^(k)_i(v(t)) +μ^(k)_i+1 v^(k)_i+1(t) - μ^(k)_i v^(k)_i(t) i∈{1,…,B^(k)-1} , k∈{ 1,…,K } which are equivalent to the dynamic balance equations μ^(k)_i ν^(k)_i =λ f^(k)_i-1(ν), i∈{1,…,B^(k)} , k∈{ 1,…,K } . We also have equations for the ratio of each server type: ∑_i=0^B^(k)ν^(k)_i=γ_k, k∈{1,…,K }. (<ref>) + (<ref>) provide algebraic equations for ν. We also propose another approach to obtain ν numerically, by solving the transient equations (<ref>) and taking the solution at a large enough point in time. (This assumes convergence to a single asymptotically stable solution, which we do not aim to prove rigorously.) When the f^(k)_i are discontinuous at ν, more considerations are needed to derive the dynamic balance equations. This will be addressed separately for each load balancing principle. Further remarks. 
The assumption that both arrival and service are Markovian means that the entire system is a Markov (population) process, which keeps the setup fairly simple. Interestingly, the same mean-field limit would be obtained for any arrival process as long as the arrivals average out in the mean-field limit; to be more precise, for any arrival process for which the Functional Strong Law of Large Numbers holds (see e.g. Theorem 3.2.1 in <cit.>). In case the monotonicity condition (<ref>) does not hold, mean-field convergence may fail. <cit.> contains specific examples where (<ref>) has multiple fixed points; stable fixed points correspond to quasi-stationary distributions of the population process for any finite N. The solution of (<ref>) will converge to one of the stable fixed points (depending on the initial condition). However, for any finite N, the population process will spend very long periods of time near one of the quasi-stationary points, switching between these points infinitely often. §.§ Mean system time A wide variety of parameters can be considered to describe the efficiency of such a system. A natural choice is the mean system time: the average time a job spends in the system between its arrival and service. We aim to calculate the mean system time H in the stationary mean-field regime. We note that the mean system time is a somewhat artificial object here since technically there are no individual jobs in the mean-field limit. It may be helpful to think of the mean-field limit as the case when N is extremely large. One way to compute H is via Little's Law H=L/λ_e, where L is the mean queue length in the system, and λ_e is the effective arrival rate (which excludes jobs not entering the system due to job loss). From the mean-field stationary distribution ν, L is easily computed, while λ_e depends on the load balancing policy, but is typically also straightforward to compute. Little's law can actually be applied to each server type separately for more detailed information; this is addressed in Appendix <ref>. Here we propose a different method to compute H, which gives even more detailed information, and will be useful later on. Let H_i,j^(k) denote the mean time until service for a job that is in position i in a queue of type k with j jobs total (so 1≤ i ≤ j ≤ B^(k), 1≤ k≤ K). In the case of constant service rates, H^(k)_i,j= i/μ^(k) holds. For non-constant service rate curves however, the service rate may change due to later arrivals, so we need to keep track of both the length of the queue and the position of the job within it. We will derive a system of linear equations using total expectation and the Markov property. For simplicity, we assume FIFO service principle in the following calculations, but due to Little's law, this assumption does not affect the value of H. The mindset is that we are following a tagged job at position i of a queue of type k with total queue length j, and the equations are based on possible changes in the queue, with the environment fixed due to the stationary mean-field regime. H_i,j^(k) = 1/λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)+ λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H_i,j+1^(k)+ μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)-1), H_i,B^(k)^(k) =1/μ_B^(k)^(k)+H_i-1,B^(k)-1^(k) (2≤ i≤ B^(k)), H_1,j^(k) =1/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)+λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H_1,j+1^(k) (1≤ j≤ B^(k)-1), H_1,B^(k)^(k) =1/μ_B^(k)^(k). (<ref>) makes use of the standard one step argument. 
We focus on a single queue of a given type k in the mean-field limit while assuming the environment to be stationary, and look for the next possible change in that queue. Jobs arrive to type k servers of queue length j with a rate of Nλ f_j^(k)(ν), and each job will be sent to one of Nν_j^(k) servers, so the arrival rate at a specific queue will be Nλ f_j^(k)(ν)/Nν_j^(k)=λf_j^(k)(ν)/ν_j^(k), while the service rate is μ_j^(k), so the rate of any change for a queue of length j is λf_j^(k)(ν)/ν_j^(k)+μ_j^(k). The change will either increase or decrease the length of the queue by 1, and we can apply total expectation. For full queues (j=B^(k)), arrival is not possible, that is, f_B^(k)^(k)(.)≡ 0 for k=1,…,K. In order to solve (<ref>), we first obtain the mean-field stationary distribution ν. ν can be calculated from either the balance equations (<ref>) when possible, or by numerically solving the transient mean-field equations (<ref>) and setting t large enough. Once ν is obtained, (<ref>) is just a system of linear equations for H_i,j^(k), which can actually be solved separately for each k for 1≤ k ≤ K. Once (<ref>) is solved, the mean system time H is just a linear combination of the values H_j,j^(k) according to the probabilities with which a job will be scheduled to a queue of length j-1 of a k-type server, that is, H=1/∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)H_j,j^(k). The normalizing factor in (<ref>) addresses job loss, as we only want to consider the mean system time of jobs which actually enter the system. Job loss probability is equal to 1-∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν). (<ref>) and (<ref>) are only valid if the dispatch functions f_i^(k) are continuous at ν. In other cases, we may need to tweak the formulas. We will provide the corresponding versions of (<ref>) and (<ref>) on a case-by-case basis whenever the functions f_i^(k) are discontinuous at ν. These versions will be heuristic in the sense that no formal rigorous proof will be provided, but the results nevertheless agree with the results from simulations. §.§ System time distribution In this section, we calculate the system time distribution for a random job. Here, the service principle is actually important; we will present the calculation for FIFO service principle here. The calculations need to be modified for LPS service principle; the corresponding equations are provided in Appendix <ref>. Let h_i,j^(k)(t) denote the probability density function of the remaining system time of a job at position i in a queue of length j and queue type k. Its Laplace-transform is defined as H̃_i,j^(k)(s)=∫_0^∞ h_i,j^(k)(t)e^-stdt. The following system of equations is the corresponding version of (<ref>) for the Laplace-transforms instead of the means. Total expectation also applies to Laplace-transforms, and we use the fact that the Laplace-transform of 0 is 1 and the Laplace-transform of λ e^-λ t is λ/s+λ to obtain H̃_i,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)( λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i,j+1^(k)(s)+ μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i-1,j-1^(k)(s)) (2≤ i≤ j≤ B^(k)), H̃_1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)( λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j+1^(k)(s)+ μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)) (1≤ j≤ B^(k)). The corresponding version of (<ref>) is H̃(s)= ∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)H̃_j,j^(k)(s). Once again, (<ref>) and (<ref>) are valid when the functions f_i^(k) are continuous at ν. 
In other cases, we may need to tweak the formulas on a case-by-case basis. The system time distribution can then be computed in the following manner: * We first compute the mean-field stationary distribution ν. This can be done either by solving the balance equations (<ref>), or by numerically solving the mean-field transient equations (<ref>), and setting a large enough t. * Once ν is available, (<ref>) is a system of linear equations for H̃_i,j^(k)(s) that is straightforward to solve. * Then H̃(s) is computed from (<ref>). * Finally, H̃(s) is transformed back to time domain. Due to (<ref>), H̃(s) is a rational function, whose inverse Laplace transform can be computed numerically. For numerical inverse Laplace transformation methods, we refer to <cit.>. We note that this approach to compute H̃(s), while explicit, has its limitations, as the formula for H̃(s) can get complicated for even moderately large K and B^(k) values. We address the feasibility further in Section <ref>. Job losses occur only upon arrival, that is, all jobs that actually enter the system will be served, so h_i,j^(k)(t) is a proper probability density function with ∫_0^∞ h_i,j^(k)(t) dt=1. However, if ∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)<1, then H̃(s) is the Laplace-transform of a nonnegative function whose integral is equal to 1-∑_k=1^K f^(k)_B^(k)(ν) where ∑_k=1^K f^(k)_B^(k)(ν) is the job loss probability, so in this sense, job losses are included in (<ref>). The corresponding normalized version of (<ref>) is 1/1-∑_k=1^K f^(k)_B^(k)(ν)∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)H̃_j,j^(k)(s), which is the Laplace-transform of a proper pdf whose integral is 1. Depending on the load balancing principle, job losses may or may not be possible in the mean-field limit. This will be addressed specifically for each load balancing principle (For a finite system, job losses are always possible due to the finite buffers and fluctuations in either the job arrival or service speed.) § LOAD BALANCING PRINCIPLES The load balancing principle describes the method the dispatcher uses to distribute the arriving jobs between the servers. It is quite important in large scale systems where the resources such as computing capacity are distributed between a large number of individual servers, and can make a big difference in the efficiency of the system. The general goal of load balancing is to avoid long queues, directing incoming jobs to shorter queues instead. There are several load balancing principles in use. Static policies do not consider the state of the system, only focusing on the incoming jobs. One example would be the round-robin load balancing policy, where incoming jobs are directed to the next server cyclically. Static load balancing principles are generally easy to operate, as they require minimal communication with the servers. Out of the principles observed in this paper, Random assignment falls into this category. Dynamic principles, which take into account the current state of the system, can be more efficient. In real clusters, there is a trade-off: complicated policies require more communication and computation, generating a higher overhead communication cost, but provide better balancing. That said, in the mathematical framework we present, the cost of communication overhead is not modeled. Including the cost of overhead communication to provide an analytical framework for more realistic models is subject to further research. In some systems it may be possible to reassign jobs that have been already assigned to new servers. 
It might also be possible that several servers “team up” to serve a single job. In our setting, we do not explore these options, and stick to a scenario where all jobs are assigned to a single server immediately upon arrival. On the other hand, in addition to the usual FIFO service principle, the framework does allow for limited processor sharing (LPS), where a single server can serve multiple jobs simultaneously. In this paper we will examine 5 load balancing principles: * Random assignment, where jobs are distributed randomly. With this principle, there is no actual load balancing. This principle will serve mostly as a baseline for comparison. * Join-Idle-Queue, where jobs are directed to idle queues if possible. A relatively recent idea <cit.>, further explored in <cit.>. * Join-Shortest-Queue, where jobs are directed to the server with the fewest number of jobs waiting in queue. One of the earliest load balancing policies that has been widely used for decades <cit.>. It provides very even balancing, but at the cost of high overhead communication, as the dispatcher needs to keep track of the queue length in every single server at all times. * Join-Shortest-Queue(d), where jobs are directed to the server with the fewest number of jobs waiting in queue from among d servers selected randomly. Also referred to as power-of-d, this is a version of JSQ that aims to reduce communication overhead at the cost of less strict balancing. It has been thoroughly explored, and has certain asymptotical optimality properties already for d=2 <cit.>. * Join-Below-Threshold, where jobs are directed to servers with a queue length below a prescribed threshold <cit.>. All of the above principles are based on natural intuitions that aim towards directing jobs to shorter queues, but they differ in the details and execution of doing so. In this section, we overview these load balancing principles from the literature. We present a high-level mathematical framework based on the Poisson representation of Section <ref> that is applicable to all of them, with the only difference being the f_i^(k)(.) functions. For each load balancing policy, we identify f_i^(k)(.), then write the mean-field equations corresponding to (<ref>). We also identify the mean-field stationary distribution ν whenever available explicitly. In case the f_i^(k)(.) functions are discontinuous at ν, we also rewrite the formulas (<ref>) and (<ref>) so that they can be used to compute the mean system time, and rewrite the formulas (<ref>) and (<ref>) for system time distribution. §.§ Random assignment This is the most simple principle that we observe, and it does not lead to any balancing. With this setup the queues basically operate, and thus can be analyzed independently of each other. For random assignment, f_i^(k)(x)=x^(k)_i, k∈{ 1,…,K} , and accordingly, the mean-field equation is v_i^(k)(t)= ∫_0^t λ v^(k)_i-1(s)ṣ -∫_0^t λ v^(k)_i(s)ṣ +∫_0^t μ_i+1 v^(k)_i+1(s)ṣ -∫_0^t μ_i v^(k)_i(s)ṣ. The mean-field balance equations, obtained from (<ref>), are μ_i^(k)ν_i^(k)=λν_i-1^(k) k∈{ 1,…,K} , i∈{ 1,…,B} . Solving (<ref>) gives the mean-field stationary distribution ν^(k)_i=c_k∏_j=1^iλ/μ^(k)_j, i∈{ 0,…,B^(k)} , with the c_k's coming from (<ref>). This is in accordance with the queues being independent. Since the rates f_i^(k) are continuous, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution. 
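For Random assignment the product form above makes the stationary quantities explicit. As a sketch (using Little's law instead of the recursion (<ref>), which should yield the same mean), the per-type stationary distribution and the mean system time can be computed as follows; the function names are illustrative.

def random_assignment_stationary(lam, gammas, service_curves):
    """nu_i^{(k)} = c_k * prod_{j<=i} lam / mu_j^{(k)}, normalized so that
    sum_i nu_i^{(k)} = gamma_k; service_curves[k][i] = mu_i^{(k)}, index 0 unused."""
    nu = []
    for gamma_k, mu in zip(gammas, service_curves):
        w, prod = [1.0], 1.0
        for i in range(1, len(mu)):
            prod *= lam / mu[i]
            w.append(prod)
        c = gamma_k / sum(w)
        nu.append([c * x for x in w])
    return nu

def mean_system_time_random(lam, nu):
    """Little's law: H = L / lambda_e, where L is the mean queue length per server and
    lambda_e = lam * (1 - loss), with loss = sum_k nu_{B^{(k)}}^{(k)} for Random assignment."""
    L = sum(i * p for dist in nu for i, p in enumerate(dist))
    loss = sum(dist[-1] for dist in nu)
    return L / (lam * (1.0 - loss))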
Job loss is possible for Random assignment, but is taken into account by the formulas (<ref>) and (<ref>). §.§ Join-Idle-Queue For Join-Idle-Queue (JIQ), incoming jobs are assigned to an idle server at random. If none of the servers are idle, a server is selected at random. For JIQ, using the notation y_0=∑_k=1^K x_0^(k), we have f^(k)_i(x)= {[ x_i^(k)/y_0 if i=0, y_0 >0,; 0 if i>0, y_0 >0,; x_i^(k) if y_0=0. ]. This system has been addressed in <cit.> for constant service rate curve and a homogeneous cluster. The structure of the mean-field stationary distribution ν depends on the relation between λ and ∑_k=1^K γ_k μ_1^(k). We address three cases separately. §.§.§ JIQ, subcritical case When λ<∑_k=1^K γ_k μ_1^(k), there will always be idle queues in the mean-field stationary limit, so all jobs will be directed to idle queues. ν is concentrated on queues of length 0 and 1. From (<ref>) we have μ_1^(k)ν_1^(k)=λν_0^(k)/∑_k=1^K ν_0^(k). We do not have an explicit solution to (<ref>), but it can be solved numerically, and numerical experiments suggest a single fixed point ν. In this region, the functions f_i are continuous, so (<ref>) and (<ref>) can be used to compute the mean system time H: H=∑_k=1^K ν_0^(k)/∑_k=1^K ν_0^(k) H_1,1^(k), and (<ref>) and (<ref>) can be used to compute the entire Laplace-transform of the system time distribution. For subcritical JIQ, in the mean-field limit, there will be no job loss. §.§.§ JIQ, critical case For λ=∑_k=1^K γ_k μ_1^(k), the mean-field stationary distribution is concentrated on queues of length 1, so we simply have ν_1^(k)= γ_k, k∈ (1,…,K). The functions f_i^(k) are discontinuous at ν, so (<ref>) and (<ref>) does not apply. Instead, in the dynamic balance, whenever a queue of length 1 finishes service, a new job will enter immediately. With this, we can write the equivalent of (<ref>) for JIQ: H_i,j^(k) = 1/μ_j^(k)+H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)), H_1,j^(k) =1/μ_j^(k) (1≤ j≤ B^(k)-1), As we can see it is basically equivalent with (<ref>) in this case, because the discontinuity would only affect the arrival rate, and it is multiplied by 0 for every relevant term. In the mean-field limit, all jobs go to queues of length 0 (which will then stay at length 1 for a positive amount of time), and there are no queues with 2 or more jobs. Accordingly, instead of (<ref>), we have H=∑_k=1^K μ_1^(k)ν_1^(k)/λ H_1,1^(k). For the Laplace transforms, we have H̃_i,j^(k)(s) = μ_j^(k)/s+μ_j^(k)H̃_i-1,j-1^(k)(s), (2≤ i≤ j≤ B^(k)), H̃_1,j^(k)(s) =μ_j^(k)/s+μ_j^(k) (1≤ j≤ B^(k)-1), and H̃(s)=∑_k=1^K μ_1^(k)ν_1^(k)/λH̃_1,1^(k)(s). For critical JIQ, in the mean-field limit, there will be no job loss. §.§.§ JIQ, supercritical case In case λ>∑_k=1^K γ_k μ_1^(k), there will be no idle queues, so ν_0^(k)=0 for k∈ (1,…,K). We note that f_i^(k) are discontinuous at any point with ∑_k=1^Kν_0^(k)=0 and ∑_k=1^Kν_1^(k)>0; an intuitive explanation of this discontinuity is the following. Whenever a server with a single job finishes service, it will become idle. In the mean-field limit, a job will enter the idle queue instantly, so once again, we do not observe idle queues for any positive amount of time. However, similar to the λ=∑_k=1^K γ_k μ_1^(k) case, a positive percentage of all incoming jobs will go to an idle queue. To compute this percentage, we once again observe that in the mean-field stationary distribution, service from queues of length 1 has to be balanced out completely by arrivals to idle queues. 
The total service rate in queues of type k of length 1 is μ_1^(k)ν_1^(k), which is thus completely balanced out by an equal amount of arrivals The remaining arrival rate (λ-∑_k=1^K μ_1^(k)ν_1^(k)) is distributed randomly. For longer queues, there are no discontinuities. Accordingly, the dynamic balance equations are (λ-∑_k=1^K μ_1^(k)ν_1^(k))ν_i^(k) = μ_i+1^(k)ν_i+1^(k), i∈(1,…,B^(k)-1). The system (<ref>) is nonlinear, but can be solved numerically. Then we can write a modified version of (<ref>) for the calculation of H^(k)_i,j. For this, we introduce z_0=∑_k=1^K μ_1^(k)ν_1^(k), dubbed the upkeep, which is the rate of service in servers with queue length 1, balanced out instantly by new arrivals. Essentially, the difference between (<ref>) and the original balance equations (<ref>) is the presence of this upkeep term in the case when the dispatch functions are discontinuous at the mean-field stationary distribution ν. According to JIQ policy, the remaining arrival rate λ-z_0 is distributed randomly for the rest of the system. Accordingly, (<ref>) becomes H_i,j^(k) = 1/(λ-z_0)+μ_j^(k)+ (λ-z_0)/(λ-z_0)+μ_j^(k)H_i,j+1^(k)+ μ_j^(k)/(λ-z_0)+μ_j^(k)H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)-1), H_i,B^(k)^(k) =1/μ_B^(k)^(k)+H_i-1,B^(k)-1^(k) (2≤ i≤ B^(k)), H_1,j^(k) =1/(λ-z_0)+μ_j^(k)+(λ-z_0)/(λ-z_0)+μ_j^(k)H_1,j+1^(k) (1≤ j≤ B^(k)-1), H_1,B^(k)^(k) =1/μ_B^(k)^(k). To obtain the mean system time H, instead of (<ref>), we now have H=∑_k=1^Kμ_1^(k)ν_1^(k)/λ H_1,1^(k)+ (1-∑_k=1^Kμ_1^(k)ν_1^(k)/λ)∑_k=1^K∑_j=2^B^(k)ν_j-1^(k)H_j,j^(k) since ∑_k=1^K μ_1^(k)ν_1^(k)/λ is the portion of the arrival rate that is used to balance out the service in queues of length 1 and the remaining portion of the incoming rate is distributed randomly. The corresponding equations for the Laplace transforms are H̃_i,j^(k)(s) = (λ-z_0)+μ_j^(k)/s+(λ-z_0)+μ_j^(k)( (λ-z_0)/(λ-z_0)+μ_j^(k)H̃_i,j+1^(k)(s)+ μ_j^(k)/(λ-z_0)+μ_j^(k)H̃_i-1,j-1^(k)(s)) (2≤ i≤ j≤ B^(k)-1), H̃_i,B^(k)^(k)(s) =μ_B^(k)^(k)/s+μ_B^(k)^(k)H̃_i-1,B^(k)-1^(k)(s) (2≤ i≤ B^(k)), H̃_1,j^(k)(s) =(λ-z_0)+μ_j^(k)/s+(λ-z_0)+μ_j^(k)((λ-z_0)/(λ-z_0)+μ_j^(k)H̃_1,j+1^(k)(s)+ μ_j^(k)/(λ-z_0)+μ_j^(k)) (1≤ j≤ B^(k)-1), H̃_1,B^(k)^(k)(s) =μ_B^(k)^(k)/s+μ_B^(k)^(k), and H̃(s)=∑_k=1^Kμ_1^(k)ν_1^(k)/λH̃_1,1^(k)(s)+ (1-z_0/λ)∑_k=1^K∑_j=2^B^(k)ν_j-1^(k)H̃_j,j^(k)(s). In general, for the supercritical JIQ case, job loss is possible, and is taken into account by the formula (<ref>). §.§ Join-Shortest-Queue For Join-Shortest-Queue (JSQ), incoming jobs are assigned to the shortest queue from among all queues; in case of multiple shortest queues of the same length, one is selected randomly. For JSQ, f^(k)_i(x)= {[ 0 if ∃ i'<i ∃ k': x_i'^(k')>0,; 0 if ∑_k=1^K x_i^(k)=0,; x_i^(k)/∑_k=1^K x_i^(k) otherwise. ]. For the stationary mean-field analysis, let i_0 denote the smallest i for which ∑_k=1^Kγ_kμ^(k)_i≥λ. Such an i exists if the stability condition (<ref>) holds. Then the mean-field stationary distribution ν will be concentrated on queues of length i_0 and i_0-1: starting from an arbitrary point, queues shorter than i_0-1 will receive the entire load of arrivals, which is larger than they can process, so these queues will “fill up” to level i_0-1, while queues longer than i_0 do not receive any load at all, so these queues will go down, until they reach level i_0. The upkeep term is very similar to the JIQ case. The total service rate in queues of length (i_0-1) is z_0=∑_k=1^Kμ^(k)_i_0-1ν^(k)_i_0-1, which is completely balanced out by an equal amount of arrivals. 
In case i_0=1, z_0=0, so there is no upkeep, and all queues are of length 0 or 1; in this case, JSQ is equivalent to either subcritical or critical JIQ. When i_0>1, there is an actual upkeep. We assume i_0>1 for the rest of this section. The remaining arrival rate (λ-z_0) goes to queues of length i_0-1, with the queue type k chosen at random with probabilities proportional to ν^(k)_i_0-1. For each server type k, these arrivals are balanced out by the service in queues of type k and length i_0, leading to the balance equations μ^(k)_i_0ν^(k)_i_0 =(λ-z_0) ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1 k∈ (1,…,K), which, along with (<ref>), give a (nonlinear) system of equations for ν, which can be solved numerically. Whenever a server with queue length i_0-1 finishes service, it will become the single shortest queue and receives a new arrival instantly. Rate (λ-z_0) remains for the rest of the system, which will be directed entirely to queues of length i_0-1. To ease notation, we also introduce y_0=∑_k=1^Kν^(k)_i_0-1. Then H_i,j^(k) = H_i,j+1^(k) (1≤ i≤ j<i_0-1), H_1,i_0-1^(k) = 1/((λ-z_0)/y_0) + μ_i_0-1^(k) + (λ-z_0)/y_0/((λ-z_0)/y_0) + μ_i_0-1^(k) H_1,i_0, H_i,i_0-1^(k) = 1/((λ-z_0)/y_0) + μ_i_0-1^(k) + (λ-z_0)/y_0/((λ-z_0)/y_0) + μ_i_0-1^(k) H_i,i_0 + μ_i_0-1^(k)/((λ-z_0)/y_0) + μ_i_0-1^(k) H_i-1,i_0-2 (2≤ i ≤ i_0-1), H_1,j^(k) =1/μ_j^(k) (i_0-1< j≤ B^(k)), H_i,j^(k) =1/μ_j^(k)+H_i-1,j-1 (i_0-1< j≤ B^(k), 1≤ i ≤ j). The first equation in (<ref>) addresses the fact that if a server has fewer than i_0-1 jobs in it, it will immediately fill up to i_0-1 jobs. We also adjust the effective arrival rate to λ -z_0, similarly to JIQ. If i_0=1, the f_i^(k) are continuous at ν, so we can use (<ref>) instead of (<ref>). If i_0=2, there will of course not be any equation with the condition (2≤ i ≤ i_0-1). If the functions f_i^(k) are continuous at ν, we can use (<ref>) to calculate the mean system time. In case i_0=1, ν is in the inside of a continuous domain of the functions f^(k)_i, so this is the case, and (<ref>) simplifies to H=∑_k=1^K ν_0^(k)/∑_k=1^K ν_0^(k) H^(k)_1,1. On the other hand, if i_0>1, the functions f_i are not continuous at ν, and (<ref>) is not applicable; instead, we have H = ∑_k=1^K μ^(k)_i_0-1ν^(k)_i_0-1/λ H^(k)_i_0-1,i_0-1 + (1-z_0/λ) ∑_k=1^K ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1H^(k)_i_0,i_0 . The corresponding equations for the Laplace transforms are H̃_i,j^(k)(s) = H̃_i,j+1^(k)(s) (1≤ i≤ j<i_0-1), H̃_1,i_0-1^(k)(s) = (λ-z_0)/y_0 + μ_i_0-1^(k)/s+(λ-z_0)/y_0+ μ_i_0-1^(k) * ( μ_i_0-1^(k)/(λ-z_0)/y_0 + μ_i_0-1^(k)+ (λ-z_0)/y_0/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_1,i_0(s)) H̃_i,i_0-1^(k)(s) = (λ-z_0)/y_0 + μ_i_0-1^(k)/s+(λ-z_0)/y_0 + μ_i_0-1^(k) * ((λ-z_0)/y_0/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_i,i_0(s) + μ_i_0-1^(k)/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_i-1,i_0-2(s)) (2≤ i ≤ i_0-1) H̃_1,j^(k)(s) =μ_j^(k)/s+μ_j^(k) (i_0-1< j≤ B^(k)), H̃_i,j^(k)(s) =μ_j^(k)/s+μ_j^(k)*H̃_i-1,j-1(s) (i_0-1< j≤ B^(k), 1≤ i ≤ j), and H̃(s) = ∑_k=1^K μ^(k)_i_0-1ν^(k)_i_0-1/λH̃^(k)_i_0-1,i_0-1(s) + (1-z_0/λ) ∑_k=1^K ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1H̃^(k)_i_0,i_0(s). Since y_0 and z_0 are straightforward to compute from ν, (<ref>) is still a linear system of equations for H̃_i,j^(k)(s), which is not any more difficult to solve than (<ref>). For JSQ, there is no job loss in the mean-field limit. (We emphasize that this is due to the stability condition (<ref>), which we assume in all cases.) 
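To make the JSQ fixed point concrete, the homogeneous case (K=1) can be solved in closed form: combining nu_{i_0-1} + nu_{i_0} = 1 with the throughput balance mu_{i_0-1} nu_{i_0-1} + mu_{i_0} nu_{i_0} = lambda gives nu_{i_0-1} = (mu_{i_0} - lambda)/(mu_{i_0} - mu_{i_0-1}), and the mean system time then follows from Little's law (equivalently, from the recursion above). A minimal sketch, with illustrative function names:

def jsq_mean_field_homogeneous(lam, mu):
    """mu[i] = mu_i with mu[0] == 0; assumes stability lam < mu[-1].
    Returns (i0, stationary distribution, mean system time)."""
    i0 = next(i for i in range(1, len(mu)) if mu[i] >= lam)
    nu_lo = (mu[i0] - lam) / (mu[i0] - mu[i0 - 1])   # mass on queues of length i0 - 1
    nu = {i0 - 1: nu_lo, i0: 1.0 - nu_lo}
    L = sum(i * p for i, p in nu.items())            # mean queue length
    return i0, nu, L / lam                           # Little's law; no job loss under JSQ

# With lam = 1.25 and mu = [0, 1.0, 1.1, 1.2, 1.3] (the value 1.1 is an assumption and does
# not affect the result), this gives i0 = 4, nu = {3: 0.5, 4: 0.5} and mean system time 2.8,
# consistent with the stationary limit reported for the homogeneous example in Section <ref>.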
§.§ Join-Shortest-Queue(d) JSQ(d) is a version of JSQ where the dispatcher first selects d servers randomly, and dispatches the incoming job to the shortest from among the d queues. If we set d=1, we get Random assignment, and if we set d=N, we get JSQ. The f_i^(k) functions are continuous for any finite d. Appendix <ref> addresses the case d→∞. For JSQ(d), we introduce the auxiliary variables y_i^(k),N=∑_j=i^B^(k)x_j^(k),N, z_i^N=∑_k=1^K y_i^(k),N, and then inclusion-exclusion shows f^(k),N_i(x^N)= x_i^(k),N/∑_k=1^K x_i^(k),N× [z_i^N(z_i^N-1/N)…(z_i^N-d-1/N) -z_i+1^N(z_i+1^N-1/N)…(z_i+1^N-d-1/N)]. The above version of f^N_i(.) is N-dependent, but converges to f_i^(k)(x)=x_i^(k)/∑_k=1^K x_i^(k)((z_i)^d-(z_i+1)^d). Due to the dependency on N, we refer to <cit.>, where this type of dependence on N is allowed. Also, both f_i^(k),N and f_i^(k) are continuous. Overall, the conclusions of Theorems <ref> and <ref> apply. The mean-field balance equations are λν_i^(k)/∑_k=1^K ν_i^(k)((∑_k=1^K∑_j=i^B^(k)ν_j^(k))^d-(∑_k=1^K∑_j=i+1^B^(k)ν_j^(k))^d) =μ_i^(k)ν_i^(k). Since the rates f_i^(k) are continuous, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution. Job loss is possible for JSQ(d), but will be typically small enough to be negligible in practice. §.§ Join-Below-Threshold Join-Below-Threshold (JBT) sets a threshold M_k which may depend on the server type k; servers of type k with queue length <M_k are considered available and servers of type k with queue length ≥ M_k are full. Tasks will be dispatched to a random available servers. If there are no available servers, jobs will be dispatched at random among all servers. JBT is commonly used in accordance with limited processor sharing (LPS) for servers which can serve multiple jobs simultaneously in an efficient manner. This is reflected in an increasing service rate curve μ_i^(k). If μ^(k)_i would start to decrease for large i, this is countered by setting the threshold M_k at the maximum point. M_k is referred to as the multi programming level (MPL), and is the number of jobs served simultaneously in a single server, while further jobs wait in queue. Overall, this setup ensures the service rate curve μ^(k)_i is increasing up to M_k and constant for M_k≤ i≤ B^(k). If we set the threshold to 1, we get the JIQ principle, and if we set it to B^(k), we get Random assignment. We introduce the auxiliary variable y= ∑_k=1^K∑_j=0^M_k-1x^(k)_j, which is the ratio of available servers. For JBT, f_i^(k)(x)= {[ 0 if y>0, i≥ M_k,; x^(k)_i/y if y>0, i<M_k,; x^(k)_i if y=0. ]. The mean-field balance equations are μ^(k)_i ν^(k)_i =λν_i-1^(k)/y, i∈{1,…,M_k-1} , k∈{ 1,…,K }, with ν_i^(k)=0 for i>M_k. For a full, detailed mean-field analysis of JBT, we refer to <cit.>. Apart from the stability condition (<ref>) and monotonicity condition (<ref>), it is usually also assumed that λ<∑_k=1^K γ_kμ_M_k, which is a stability condition stronger than (<ref>), ensuring that the evolution of the transient mean-field limit eventually enters and then never leaves the region where no queues are longer than the threshold. On this domain, the functions f_i^(k) are continuous, and the mean-field stationary solution ν is unique and also inside this domain. An efficient numerical method to compute ν is provided in <cit.>. 
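For JBT, one convenient way to obtain nu numerically (a sketch of one possible approach, not necessarily the method of <cit.>) is to parametrise the balance equations by the ratio of available servers y and close the loop with one-dimensional root finding. Here we treat the dynamic balance mu_i^{(k)} nu_i^{(k)} = lambda nu_{i-1}^{(k)} / y as holding for all 1 <= i <= M_k, which is an assumption on our part; the sketch uses scipy.

from scipy.optimize import brentq

def jbt_mean_field(lam, gammas, service_curves, thresholds):
    """service_curves[k][i] = mu_i^{(k)}, thresholds[k] = M_k.
    Assumes the stronger stability condition lam < sum_k gamma_k mu_{M_k}^{(k)}."""
    def nu_given_y(y):
        nu = []
        for gamma_k, mu, M in zip(gammas, service_curves, thresholds):
            w, prod = [1.0], 1.0
            for i in range(1, M + 1):
                prod *= lam / (y * mu[i])
                w.append(prod)
            c = gamma_k / sum(w)
            nu.append([c * x for x in w])     # nu_i^{(k)} = 0 for i > M_k
        return nu

    def residual(y):
        nu = nu_given_y(y)
        available = sum(sum(dist[:-1]) for dist in nu)   # servers strictly below threshold
        return available - y

    y = brentq(residual, 1e-9, 1.0)
    return nu_given_y(y), y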
As a side note, <cit.> also shows examples where (<ref>) does not hold, and there are multiple attractors in the mean-field system corresponding to quasi-stationary states of a system with a finite N, and mean-field convergence fails completely. If (<ref>) and (<ref>) hold, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution. Job loss is not possible for JBT. § NUMERICAL EXPERIMENTS We conducted several numerical experiments. These are by no means exhaustive, but should nevertheless display some interesting properties and allow for some numerical comparison of the various load balancing methods. For several parameter setups, we examined simulations for various choices of N, and also computed the mean-field limit (N=∞). Simulations were done in Python and symbolic computations were done in Wolfram Mathematica. The codes for both are available at <cit.>. For the symbolic calculations, numerical inverse Laplace transform was used, for which packages are available at <cit.>. Section <ref> displays transient mean-field convergence as N is increased. Also, as t is increased, each system will converge to its stationary state. Section <ref> compares the mean service times for both simulations and the mean-field settings. Section <ref> addresses service time distributions. §.§ Homogeneous transient mean-field diagrams In this section, we plot the solutions of the mean-field equations as well as the corresponding x_i^(k),N curves for systems with N=1000 and N=10000 servers, resulting from simulations. We will focus on homogeneous clusters with K=1 (also dropping (k) from the notation). B=B^(k), the maximal queue length will be set to 10. The rest of the parameter setup is shown in Table <ref>. The parameter setup adheres to the monotonicity assumption (<ref>) and also the stability condition (<ref>) (in fact, the system load can be computed as λ/μ_B in a homogeneous cluster). Figures <ref>–<ref> display simulation results for the transient evolution of the homogeneous system using various load balancing policies. For each load balancing policy, two plots are included: the number of servers is N=1000 for the plot on the left and N=10000 for the plot on the right. Other system parameters are according to Table <ref>. All systems are initially empty. The x axis is time, and the jagged line graphs show the ratio of servers with queue length 0 to 10 respectively. These have some natural fluctuations. Also included are the transient mean-field limits, which are smooth curves. §.§.§ Random Figure <ref> displays the transient evolution with Random load balancing policy. A significant ratio of queues is longer throughout; overall, the Random load balancing principle is rather inefficient, and serves mostly as a baseline. Later we will see the effect of more efficient load balancing principles on the same systems. The fluctuations of the simulations decrease as N is increased. Actually, as mentioned after Theorem <ref>, the fluctuations are guaranteed to be of order 1/√(N) for x^N (or, equivalently, order √(N) for X^N). However, the constant factor can be different for the various load balancing principles. For Random assignment, the fluctuations are relatively mild. Convergence to stationarity can also be observed: as time increases, the smooth graphs converge to the mean-field stationary distribution. 
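The smooth limit curves in these figures can be reproduced by integrating the mean-field equations (<ref>) numerically. The sketch below does this for a homogeneous cluster with a pluggable dispatch function, shown here for Random assignment; the exact parameter values of Table <ref> are not repeated, and the function names are ours.

import numpy as np
from scipy.integrate import solve_ivp

def mean_field_transient(lam, mu, f, T, v0=None):
    """Integrate dv_i/dt = lam (f_{i-1}(v) - f_i(v)) + mu_{i+1} v_{i+1} - mu_i v_i
    for a homogeneous cluster; f(v) returns the dispatch probabilities (f_0(v),...,f_B(v))."""
    B = len(mu) - 1
    if v0 is None:
        v0 = np.zeros(B + 1)
        v0[0] = 1.0                       # initially empty cluster
    def rhs(t, v):
        fi = f(v)
        dv = np.zeros(B + 1)
        for i in range(B + 1):
            dv[i] = (lam * (fi[i - 1] if i > 0 else 0.0) - lam * fi[i]
                     + (mu[i + 1] * v[i + 1] if i < B else 0.0) - mu[i] * v[i])
        return dv
    return solve_ivp(rhs, (0.0, T), v0, dense_output=True)

def f_random(v):
    """Random assignment: f_i(v) = v_i, except that full queues receive no jobs."""
    fi = np.array(v, dtype=float)
    fi[-1] = 0.0
    return fi

For the discontinuous dispatch functions (JIQ, JSQ), the right-hand side would have to be regularized or the solution interpreted as a differential inclusion, as discussed in Section <ref>.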
That said, for any fixed finite N, the order of the fluctuations will not go to 0 as time is increased. §.§.§ JIQ Figure <ref> displays the transient evolution with JIQ load balancing policy for λ=0.95 and λ=1.25. Figures <ref> and <ref> have λ=0.95 (with other parameters according to Table <ref>), which is subcritical due to λ=0.95<μ_1=1 (see Section <ref>), so the system stabilizes on queues of length 0 and 1. Figures <ref> and <ref> have λ=1.25>μ_1=1, which is supercritical, so the system starts out by filling up all empty queues in a sharp manner. After this initial period, no empty queues are present anymore, and the dynamic dispatch is distributed among queues of length 1 through 10 randomly. Similar to Random policy, once again longer queues are present in the system. §.§.§ JSQ(2) and JSQ(5) Figure <ref> displays the transient evolution with JSQ(2) load balancing policy. Already for d=2, the result is markedly different from Random assignment. This is a known phenomenon, referred to as power-of-2 <cit.>. The ratio of longer queues diminishes more rapidly with the queue length than for either Random or JIQ policy. Figure <ref> displays the transient evolution with JSQ(5) load balancing policy. Here, most of the queues will be of length 3 and 4, with the ratio of either shorter or longer queues much smaller. We also note that the dispatch function is continuous, so the transient mean-field limit functions are smooth, although they change rather sharply. §.§.§ JSQ Figure <ref> displays the transient evolution with JSQ load balancing policy. Here, all of the queues will be of length 3 and 4 after the system fills up. At any point in time, there are only 2 different queue lengths present, starting from lengths 0 and 1, switching to 1 and 2, then 2 and 3, then 3 and 4 as the system fills up. We also note that the dispatch function is discontinuous, so the transient mean-field limit functions has breaking points at switches to new queue length pairs. The stationary mean-field limit is ν_3=ν_4=0.5 due to λ=1.25=μ_3+μ_4/2=1.2+1.3/2. For any finite N, when a job in a queue of minimal length finishes service, a shorter queue will appear for a brief but positive time. In the mean-field limit, such queues are filled back instantly. We also note that the fluctuations are considerably larger than for either Random or JIQ. An intuitive explanation is that the higher level of control provided by JSQ will generally focus any fluctuations in either the arrival or service on a single queue length: if the arrivals outweigh the service for a short period of time, the surplus arrivals will all go to servers of minimal queue length. Overall, the strict control introduces a positive correlation between the length of the queues, resulting in larger fluctuations (which are, once again, of order 1/√(N), but with a higher constant factor). Principles with less strict control generally distribute this fluctuation among several different queue lengths, resulting in smaller fluctuations. §.§.§ JBT Figure <ref> displays the transient evolution with JBT load balancing policy. The MPL parameter is set to 5. In this setup, the system reaches stability before hitting the MPL threshold (and accordingly, the mean-field system reaches its attractor before the discontinuity point, so the functions remain continuous). This is the intended usage of JBT. 
§.§ Heterogeneous transient mean-field diagrams In this section, we plot the solutions of the mean-field equations as well as the corresponding x_i^(k),N curves for systems with N=10000 servers, resulting from simulations. We will focus on heterogeneous clusters with K=2. B=B^(k), the maximal queue length will be set to 10. The rest of the parameter setup is shown in Table <ref>. The parameter setup adheres to the monotonicity assumption (<ref>) and also the stability condition (<ref>). The parameter choices in Table <ref> are motivated by an actual real-life scenario: in many shopping centers, there are two types of checkouts: checkouts served by an employee (service rate 1 in Table <ref>), with a separate queue for each such checkout, and self-service checkouts. A single self-service checkout is typically slightly slower (service rate 0.8 in Table <ref>) than a checkout served by an employee, but this is countered by the fact that there is a batch of self-service checkouts for each queue (the batch size is 5 for Table <ref>). Of course, in actual shopping centers, the number of queues may or may not be high enough to warrant a mean-field approach; that said, as we will see later, some derived performance measures are well-approximated by the mean-field limit already for smaller system sizes. Figures <ref>–<ref> display simulation results for the transient evolution of the heterogeneous system using various load balancing policies. For each load balancing policy, two plots are included: the ratio of type 1 servers with various queue lengths for the plot on the left and the ratio of type 2 servers with various queue lengths for the plot on the right. Other system parameters are according to Table <ref>. All systems are initially empty. The x axis is time, and the jagged line graphs show the ratio of servers with queue length 0 to 10 respectively. These have some natural fluctuations. Also included are the transient mean-field limits, which are smooth curves. §.§.§ Random Figure <ref> displays the transient evolution with Random load balancing policy. A significant ratio of queues is longer throughout; in fact, servers of type 1 are overloaded, as can be seen from the fact that the majority of queues of type 1 has length 10 (equal to the buffer size) or close. In a heterogeneous system, with poor load balancing, it is possible that some server types are overloaded even though the system as a whole is subcritical. §.§.§ JIQ Figure <ref> displays the transient evolution with JIQ load balancing policy. JIQ does not offer a considerable improvement over Random, as once again longer queues are present in the system. This also means that servers of type 1 are overloaded, which also results in significant data loss. On the other hand, servers of type 2 are subcritical. §.§.§ JSQ(2) and JSQ(5) Figure <ref> displays the transient evolution with JSQ(2) load balancing policy. Servers of type 1 are still overloaded, in which case JSQ(2) does not offer a considerable improvement over either Random or JIQ. The system (particularly servers of type 1) goes through an initial build-up period, starting from empty and converging to stationarity with the majority of queues full (length equal to buffer size 10) or close. Figure <ref> displays the transient evolution with JSQ(5) load balancing policy. In this case, the better load balancing results in both server types being subcritical; for server type 1, the typical queue lengths are 5 and 6, while for server type 2, the typical queue lengths are 4 and 5. 
Data loss is practically negligible in this case. §.§.§ JSQ Figure <ref> displays the transient evolution with JSQ load balancing policy. The build-up period is much sharper (in fact, the mean-field limit curves are nondifferentiable at the changes in minimal queue length), with both server types eventually reaching a state where all queue lengths are either 4 or 5. Fluctuations around the mean-field limit are relatively mild for N=10000 servers. §.§.§ JBT Figure <ref> displays the transient evolution with JBT load balancing policy. MPL parameters are 1 for server type 1 and 5 for server type 2. JBT load balancing policy suits the type of heterogeneous system described by Table <ref> particularly well: the MPL settings allow to fully utilize the service capacity of each server type without allowing queues longer than necessary. In fact, JBT can outperform JSQ for heterogeneous systems, as we will see in the next section. §.§ Mean system times The main performance measure we are going to examine is the mean system time, that is, the average time a job spends between arrival and finishing service. First we examine the homogeneous system described by the parameter settings in Table <ref> for simulations for various system sizes ranging from N=10 to N=10000 and also the mean-field limit, with the various load balancing principles from Section <ref>. Table <ref> lists the mean system times from both simulations, and calculated from the mean-field limit using equations (<ref>) and (<ref>) (or in the discontinuous cases, their corresponding versions listed in Section <ref>). We note that despite long running times, the simulation results still may have an inherent small random variation. JSQ is the most effective principle, which is unsurprising (although we do emphasize that in practice, JSQ comes with a heavy overhead communication burden which was not modelled here). JSQ(d) is more effective with a higher d, but already for d=2, it is significantly better than Random, which is once again known as the power-of-2 (or power-of-d) <cit.>. We note that jobs lost are not included in the averages in Table <ref>; in order to give a more complete picture, we mention that the theoretical job loss probability for Random policy (with the same parameters as per Table <ref>) is 0.0438, and for JIQ it is 0.0136 (for JSQ(2), JSQ(5), JSQ and JBT, job loss is negligible). Job loss probabilities for the simulations are not included in the paper, we just mention that they closely match the theoretical values. Overall, based on Table <ref>, the mean-field approximation for the mean system times is exceedingly accurate already for small values of N. Next we address the heterogeneous system described by the parameter settings in Table <ref>. As long as N is finite, there are fluctuations which do not vanish even as time increases and the systems converge to their stationary limit. As expected, fluctuations are bigger for smaller values of N. For smaller values of N, the mean system time is generally above the mean-field mean system time; an intuitive explanation for this is that the limited number of servers offers less `room' to balance out short periods of overflow (coming from the natural fluctuations of arrivals and service), causing the system to operate with longer queues for said short periods. 
Once again, in order to compare the mean system time for the various load balancing principles, it is important to take into account that some of these principles operate with significant data loss: for random, the theoretical job loss probability is 0.285, for JIQ, it is 0.251, and for JSQ(2), it is 0.104. Table <ref> shows that, similar to the homogeneous case (Table <ref>), the mean-field approximation for the mean system times is very accurate for both smaller and larger choices of N (and for JSQ(5), JSQ and JBT, job loss is negligible). The only exception is JBT for N=12; for very small system sizes and system load close to critical (1.6/1.75 according to the parameters in Table <ref>), even a small burst in the arrivals can push the entire system over the threshold, at which point it switches to Random, and stays there for significant periods of time. §.§ System time distributions In this section we examine the theoretical probability density function of the system time in the mean-field limit for some setups and compare it with empirical distributions (histograms) from simulations for finite N. The theoretical distributions are calculated using equations (<ref>) and (<ref>) (or in discontinuous cases their counterparts described in Section <ref>), and inverse Laplace transformation (ILT). The system (<ref>) can be solved explicitly, and the solution is a rational function (in the Laplace transform domain). However, depending on the value of K and B^(1),…,B^(K), the solution for H̃(s) from (<ref>) can be infeasible already for moderately large values of K and B. In general, the formula for H̃(s) is relatively simple if only few of the H̃_i,j^(k)'s are nonzero, which is typically the case for JSQ. For other load balancing principles, where all H̃_i,j^(k)'s are nonzero, the explicit formula for H̃(s) from (<ref>) is infeasible already for K=2 and B^(1)=B^(2)=10. Due to this, the parameters for this setup were the homogeneous system from Table <ref> with λ=1.25. We also set B=5, to make the ILT less complicated. Just as an example, for JSQ, with the above parameters, we have H̃(s)=(24 s+65)^4/5 (2 s+5)^3 (10 s+13)^4. H̃(s) can be computed for the other load balancing principles as well, but the explicit formulas are far more complicated, and are omitted from the paper. Figure <ref> displays the theoretical pdf of the system time in the mean-field limit with a red curve, while the blue histograms are from simulations with N=1000 servers. Each system was run long enough to reach the stationary regime, and only jobs arriving during this period were considered. The theoretical pdf's are normalized as per (<ref>). In general, all histograms match the theoretical pdf's well. For random assignment and JIQ (which is supercritical with the given parameters), the system time is less concentrated (e.g. it has a higher variance). JSQ is the only one where the system time density is 0 at time t=0; for all other load balancing principles, it is possible that a job starts service immediately, which corresponds to a positive density at t=0. For JSQ(2) and JSQ(5), the match between the theoretical and numerical distributions is slightly less perfect than for others (although still very good); the exact reason for this is subject to further research. § CONCLUSION AND OUTLOOK In this paper we examined the mean-field transient and stationary convergence of systems with several different load-balancing principles based on queue length. 
While no rigorous proof was presented, the simulations suggest that mean-field convergence holds even for discontinuous f_i^(k) dispatch functions. We have provided formulas to compute the stationary mean-field limit, and also the mean system time in the mean-field stationary regime. In addition to that, the entire service time distribution could also be calculated with the help of the Laplace transform, adapting (<ref>) and (<ref>) for the Laplace transforms of the system times. We have also examined the mean system time numerically for several parameter setups. There is a lot of possibility for further work in this topic. One direction would be to provide mathematically rigorous proofs for versions of Theorems <ref> and <ref> for some of the discussed systems with discontinuous dispatch functions. Another direction is scenarios where further information is available (e.g. job size); in such cases, that information can be used to estimate the load of each queue more precisely and design other load balancing principles. Yet another direction is to add a geometrical dimension to the server cluster, with the load balancing principle taking into account the distance of the arriving job to the queues (e.g. as in a shopping center, where customers are more likely to choose a queue physically closer to their arrival point). We could also make the model more realistic, even if more complicated, by considering the dispatcher's communication overhead cost. However, we expect the communication overhead cost to be highly dependent on actual system settings, and as such, it seems difficult to incorporate it in a high level model in a general manner. Another direction is to allow different job types, with certain job types can be served more efficiently by certain server types. All in all, this is a vast topic that has a lot of potential for further development. abbrv § LITTLE'S LAW In a heterogeneous system, Little's law applies to the entire system in the mean-field stationary regime, and also applies to each server type separately. It is valid regardless if the dispatch functions are continuous or not, but requires some consideration for discontinuous dispatch functions. In this section, we provide the proper formulas for each load balancing principle. Let λ^(k) denote the effective arrival rate to servers of type k, and L^(k) denote the average queue length in servers of type k (k=1,…, K). Using these, we can compute the mean system time for a job in a server of type k via Little's law as H^(k)=L^(k)/λ^(k). For any load balancing principle, L^(k)=∑_i=0^B^(k) iν_i^(k)/∑_i=0^B^(k)ν_i^(k). The formula for λ^(k) is different for continuous and discontinuous dispatch functions. For dispatch functions continuous at ν (this case includes Random, JSQ(d), JBT and also subcritical JIQ and JSQ with i_0=1), the formula for λ^(k) is λ^(k)=λ∑_i=0^B^(k)-1f_i^(k)(ν)/∑_i=0^B^(k)ν_i^(k). For supercritical JIQ, we have λ^(k)=μ_1^(k)ν_1^(k)+(λ-z_0) ∑_i=1^B^(k)-1ν_i^(k)/∑_i=0^B^(k)ν_i^(k), and for JSQ with i_0>1, we have λ^(k)=μ_i_0-1^(k)ν_i_0-1^(k)+(λ-z_0) ν_i_0-1^(k)/∑_k=1^Kν_i_0-1^(k)/ν_i_0-1^(k)+ν_i_0^(k). § SYSTEM TIME DISTRIBUTION FOR LPS SERVICE PRINCIPLE This section is a counterpart of Section <ref>; we provide formulas to compute the system time distribution for limited processor sharing (LPS) service principle. 
For LPS, each server type has a parameter called the multi-programming level (MPL); the server can serve a number of jobs up to the MPL simultaneously, dividing its service capacity evenly, while further jobs wait in a FIFO queue. Once again, let h_i,j^(k)(t) denote the probability density function of the remaining system time of a job at position i in a queue of length j and queue type k. M^(k) denotes the multi-programming level of queues of type k. The order of jobs is irrelevant among jobs already in service; that is, for fixed k and j, h_i,j^(k)(t) is constant for i≤min(j,M^(k)). Accordingly, in the formulas we will write h_1,j^(k)(t) instead of h_i,j^(k)(t) for i≤min(j,M^(k)). For jobs that are not yet in service (i> M^(k)), their position within the queue is still relevant. For LPS, when the tagged job is in service, three type of changes can occur to its queue: arrival, or the tagged job finishes service, or another job finishes service. In the last case, it does not matter whether the finished job is ahead or behind the tagged job. When the tagged job is not yet in service, only two type of changes can occur: arrival, or another job finishes service. We also use once again that arrival is not possible when the queue is full (j=B^(k)), that is, f_B^(k)^(k)(.)≡ 0 for k=1,…,K. The corresponding version of (<ref>) is as follows: H̃_1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)( λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j+1^(k)(s)+ μ_j^(k)(M^(k)-1)/M^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j-1^(k)(s)+ μ_j^(k)/M^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)) (1≤ i≤ M^(k)≤ j≤ B^(k)), H̃_1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)( λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j+1^(k)(s)+ μ_j^(k)(j-1)/j/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j-1^(k)(s)+ μ_j^(k)/j/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)) (1≤ i ≤ j< M^(k)), H̃_M^(k)+1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)( λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_M^(k)+1,j+1^(k)(s)+ μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j-1^(k)(s)) ( j≤ B^(k)), H̃_i,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)( λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i,j+1^(k)(s)+ μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i-1,j-1^(k)(s)) (M^(k)+1<i≤ j≤ B^(k)). Once again, (<ref>) and (<ref>) are applicable to compute H̃(s) when the dispatch functions f_i^(k) are continuous at ν. In other cases, the formulas may need to be modified. § PARTIAL CONTROL We highlight a situation dubbed partial control. In such a system, some of the jobs are not subject to the load balancing policy, and will simply be dispatched randomly. A real life example for partial control would be directing traffic via cooperating navigation apps in cars: each car with a cooperating navigation app is subject to load balancing, but drivers without the app select routes not subject to the same load balancing. Assume we have a system with a load balancing policy corresponding to some dispatch functions f_i^(k)(x). Load balancing only has partial control: for each job, with some fixed probability 0<p≤ 1, the job will be dispatched according to the load balancing policy, but with probability (1-p), it will be dispatched randomly. In this case, the corresponding dispatch functions are simply f̂_i^(k)(x) = p f_i^(k)(x) + (1-p)x_i^(k). Figure <ref> shows transient plots with JSQ load balancing principle with low (p=0.3) and high (p=0.8) levels of control. 
System parameters are according to Table <ref> with λ = 1.25 and N=10000. With a low level of control, the transient behaviour is closer to the case of random assignment, with longer queues also present. For low control, the minimal stationary queue length is 2, lower than the minimal stationary queue length 3 in case of full control JSQ, as the system needs to balance fewer controlled jobs (e.g. the upkeep is lower). For high control (p=0.8), the minimal stationary queue length remains 3, but once again, longer queues are also present. § CONVERGENCE OF JSQ(D) TO JSQ AS D→∞ This section shows an interesting visualisation of JSQ(d)'s “convergence” to JSQ as d→∞. Figure <ref> displays the solutions of the transient mean-field equations for various choices of d. In practice, JSQ(d) is quite close to JSQ already for moderately large values of d. We note that the mean-field transient solutions are smooth for JSQ(d) for any choice of d, but not for JSQ.
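As a rough numerical companion to the figure discussed above, the following sketch integrates the classical homogeneous power-of-d mean-field equations (d=1 corresponding to Random) for a single server type with unit service rate and a finite buffer. These equations are only a special case of the general system (<ref>), so the sketch is purely illustrative; the function name, step size and parameter values are ours.

```python
import numpy as np

def jsq_d_mean_field(d, lam=1.25, B=10, T=30.0, dt=1e-3):
    """Euler integration of the homogeneous power-of-d mean-field ODE.

    s[i] = fraction of queues with length >= i (s[0] = 1), unit service rate,
    buffer size B (jobs finding a full minimum are lost), system initially empty:
        ds_i/dt = lam * (s_{i-1}^d - s_i^d) - (s_i - s_{i+1}),   1 <= i <= B.
    """
    s = np.zeros(B + 2)       # s[0..B+1]; s[B+1] stays 0
    s[0] = 1.0
    for _ in range(int(T / dt)):
        ds = np.zeros_like(s)
        for i in range(1, B + 1):
            ds[i] = lam * (s[i - 1] ** d - s[i] ** d) - (s[i] - s[i + 1])
        s += dt * ds
    return s[1:B + 1]         # tail probabilities near stationarity

for d in (1, 2, 5, 20):
    tail = jsq_d_mean_field(d)
    print(d, np.round(tail, 3), "mean queue length:", round(tail.sum(), 3))
```

Increasing d concentrates the queue-length distribution on one or two values, mirroring the approach of JSQ(d) to JSQ visible in the figure.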
http://arxiv.org/abs/2307.07438v1
20230714160346
The Shimura lift and congruences for modular forms with the eta multiplier
[ "Scott Ahlgren", "Nickolas Andersen", "Robert Dicks" ]
math.NT
[ "math.NT" ]
The Shimura lift and congruences for modular forms with the eta multiplier

Scott Ahlgren, Department of Mathematics, University of Illinois, Urbana, IL ([email protected])
Nickolas Andersen, Department of Mathematics, Brigham Young University, Provo, UT ([email protected])
Robert Dicks, Department of Mathematics, University of Illinois, Urbana, IL ([email protected])

The first author was supported by a grant from the Simons Foundation (#963004 to Scott Ahlgren). The second author was supported by a grant from the Simons Foundation (#854098 to Nickolas Andersen).

The Shimura correspondence is a fundamental tool in the study of half-integral weight modular forms. In this paper, we prove a Shimura-type correspondence for spaces of half-integral weight cusp forms which transform with a power of the Dedekind eta multiplier twisted by a Dirichlet character. We prove that the lift of a cusp form of weight λ+1/2 and level N has weight 2λ and level 6N, and is new at the primes 2 and 3 with specified Atkin-Lehner eigenvalues. This precise information leads to arithmetic applications. For a wide family of spaces of half-integral weight modular forms we prove the existence of infinitely many primes ℓ which give rise to quadratic congruences modulo arbitrary powers of ℓ.

August 12, 2023
===================

§ INTRODUCTION AND STATEMENT OF RESULTS

The Shimura correspondence <cit.> is a family of maps taking modular forms of half-integral weight to modular forms of integral weight and preserving the action of the Hecke algebras. Since its introduction it has been a ubiquitous tool in the study of half-integral weight modular forms. Works of Waldspurger <cit.><cit.> and Kohnen and Zagier <cit.> establish a connection between the coefficients of half-integral weight forms and the L-functions of their Shimura lifts. Shimura's construction relies on Weil's converse theorem. Niwa <cit.> gave a more direct construction of the Shimura lift by integrating a given half-integral weight form against a suitable theta kernel. This work was refined by Cipra <cit.>, who in particular extended the results to all positive half-integral weights. These works concern modular forms whose multiplier is a power of ν_θ twisted by a Dirichlet character, where ν_θ is the multiplier on Γ_0(4) attached to the usual theta function. If f is such a form on Γ_0(4N), then the Shimura lift of f is on Γ_0(2N). Here we will consider modular forms of half-integral weight transforming with a power of the Dedekind eta multiplier ν twisted by a Dirichlet character (see Section <ref> for precise definitions). In the simplest case, suppose that (r,6)=1, and that f is a cusp form of weight λ+1/2 on _2() with multiplier ν^r. If V_m denotes the map z↦ mz, then f V_24 is a half-integral weight form in the sense of Shimura on Γ_0(576) (see Lemma <ref> for details). For a positive squarefree integer t, we can apply the usual Shimura lift _t to f(24z), which gives a form of weight 2λ on Γ_0(288). Yang <cit.> showed that in fact we have _t(f V_24) ∈ S^ 2, 3_2λ(6, - 8r, -12r)⊗12∙, i.e.
there exists a cusp form g of weight 2λ on Γ_0(6) (in the new subspace) with Atkin-Lehner eigenvalues - 8r and -12r at 2 and 3, respectively, such that _t(f V_24) = g ⊗12∙. Thus _t(f V_24) is a cusp form of level 144 (a similar result holds for the Shimura lift of f V_8 when (r,6)=3). Given Yang's result, it is natural to suspect that there exists a modification of the Shimura lift which maps f directly into S_2λ^new(6,- 8r, -12r). Here we construct such a lift in a much more general setting. In particular, if ψ is a Dirichlet character modulo N, we construct a family of lifts which map forms of half-integral weight with multiplier ψν^r on Γ_0(N) to forms of integral weight and character ψ^2 on Γ_0(6N) and which provide precise information at the primes 2 and 3. The statement of our results requires some notation. Let λ and N be positive integers and let r be an odd integer. Let ψ be a Dirichlet character modulo N. Denote by S_λ+1/2(N,ψν^r) the space of cusp forms of weight λ+1/2 on Γ_0(N) transforming with multiplier system ψν^r (by (<ref>) these spaces are trivial unless ψ(-1) = -1r(-1)^λ). When ψ is trivial we omit it from the notation. In weight 3/2, we need to avoid unary theta series, so when (r,6)=1 we define S_3/2^c(N,ψν^r) as the subspace of S_3/2(N,ψν^r) comprising forms f which satisfy ⟨ f V_24, g ⟩ = 0 for all theta functions g∈ S_3/2(576N, ψ-1∙^r+1/212∙ν_θ^3), where ⟨·, ·⟩ is the usual Petersson inner product. We make a similar definition for S_3/2^c(N,ψν^r) when (r,6)=3 (see Lemma <ref>). If (N,6)=1 then we denote by S_2λ^ 2, 3(6N,ψ^2, _2, _3) the space of cusp forms of weight 2λ on Γ_0(6N) with character ψ^2 which are new at 2 and 3, and with Atkin-Lehner eigenvalues _2 and _3 at those primes. We make a similar definition for S_2λ^ 2(2N,ψ^2, _2). For p∈{2, 3} define _p, r, ψ:=-ψ(p)4pr. Here, and throughout, ∙∙ denotes the extended quadratic symbol. For primes p, we denote the Hecke operators on S_λ+1/2(N,ψν^r) and S_2λ^ 2, 3(6N,ψ^2, _2, r, ψ, _3, r,ψ) by T_p^2 and T_p, respectively (see Section <ref> for details). Finally, let L(s,χ) denote the Dirichlet L-function. We can now state the main results, which are slightly different in the cases (r, 6)=1 and (r, 6)=3. We note that versions of each theorem can be given without the hypothesis on N; see Theorems <ref> and <ref> for details. Let r be an integer with (r,6)=1 and let t be a squarefree positive integer. Suppose that λ, N ∈^+, that (N, 6)=1, and that ψ is a Dirichlet character modulo N. Suppose that F(z) = ∑_n≡ r24 a(n) q^n/24∈ S_λ+1/2(N,ψν^r) and if λ=1 suppose further that F∈ S_3/2^c(N,ψν^r). Define coefficients b(n) by the relation ∑_n=1^∞b(n)/n^s = Ls-λ+1,ψ∙ t∑_n=1^∞12na(tn^2)/n^s. Then we have _t(F) := ∑_n=1^∞ b(n)q^n ∈ S_2λ^ 2, 3(6N,ψ^2, _2, r, ψ, _3, r,ψ). Furthermore we have _t(T_p^2F)=12pT_p _t(F) for each prime p≥ 5. In this case _t(F)=0 unless t ≡ r 24. A similar result holds when (r,6)=3; here it is most convenient to write the Fourier expansions in powers of q^1/8. Let r be an integer with (r,6)=3 and let t be a squarefree positive integer. Suppose that λ, N ∈^+, that N is odd, and that ψ is a Dirichlet character modulo N. Suppose that F(z) = ∑_n≡r/3 8 a(n) q^n/8∈ S_λ+1/2(N,ψν^r) and if λ=1 suppose further that F∈ S_3/2^c(N,ψν^r). Define coefficients b(n) by the relation ∑_n=1^∞b(n)/n^s = Ls-λ+1,ψ∙ t∑_n=1^∞-4na(tn^2)/n^s. Then _t(F) := ∑_n=1^∞ b(n)q^n ∈ S_2λ^ 2(2N,ψ^2, _2, r, ψ). Furthermore, we have _t(T_p^2F)=-4pT_p_t(F) for each prime p≥ 3. In this case we have _t(F)=0 unless t ≡ r/3 8. 
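For explicit computations, relation (<ref>) (and likewise (<ref>), with the symbol -4n in place of 12n) simply expresses b as a Dirichlet convolution: b(n) = Σ_{jk=n} χ(j) j^{λ-1} χ_12(k) a(tk²), where χ denotes the character appearing in the L-factor. A minimal sketch follows (Python; the function names are ours, and the coefficient function a and the character χ are supplied by the caller).

```python
def chi12(k):
    """Kronecker symbol (12/k): +1 if k = ±1 (mod 12), -1 if k = ±5 (mod 12), else 0."""
    return {1: 1, 11: 1, 5: -1, 7: -1}.get(k % 12, 0)

def lift_coefficients(a, chi, t, lam, n_max):
    """Coefficients b(n) of S_t(F), unpacking the Dirichlet-series relation
         sum b(n)/n^s = L(s - lam + 1, chi) * sum_n chi12(n) a(t n^2)/n^s
    as the finite convolution b(n) = sum_{jk=n} chi(j) j^(lam-1) chi12(k) a(t k^2).
    Here `a` maps m -> a(m) and `chi` is the character appearing in the L-factor."""
    b = {}
    for n in range(1, n_max + 1):
        b[n] = sum(chi(j) * j ** (lam - 1) * chi12(n // j) * a(t * (n // j) ** 2)
                   for j in range(1, n + 1) if n % j == 0)
    return b
```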
The precise relationship between _t and the standard Shimura lift _t in both cases is described precisely in Section <ref>. As an application of these theorems we prove that quadratic congruences of a particular type hold for modular forms with the eta-multiplier in a wide range of spaces. These congruences are motivated by some old examples of Atkin for the partition function <cit.> which are described in (<ref>) below. The fact that the lifts are new at 2 is crucial to the arithmetic techniques which we employ, which generalize recent results of the first author with Allen and Tang <cit.>. For the application we assume that ψ is quadratic and that ℓ≥ 5 is prime. Our first result on congruences relies on the assumption that the pair (2λ,ℓ) is suitable for the triple (N,ψ,r). This is a technical hypothesis on the mod ℓ reductions ρ̅_f of the ℓ-adic Galois representations ρ_f attached to newforms f. However, we will see in Section <ref> that (2λ,ℓ) is suitable for any triple (N,ψ,r) if ℓ > 10λ-4 and 2^2λ-1≢2^± 1ℓ. Suppose that ℓ≥ 5 is prime and that r is an odd integer. Suppose that N is a squarefree, odd, positive integer with ℓ∤ N, and 3 ∤ N if 3 ∤ r. Suppose that ψ is a quadratic character modulo N, and let k be a positive even integer. If (r,6)=1, then we say that the pair (k,ℓ) is suitable for the triple (N,ψ,r) if for every normalized Hecke eigenform f ∈ S^ 2,3_k(6N,_2,r,ψ,_3,r,ψ), the image of ρ̅_f contains a conjugate of _2(𝔽_ℓ). If (r,6)=3, then we make a similar definition for normalized Hecke eigenforms in S^ 2_k(2N,_2,r,ψ). To make the statement of the next two results uniform, in the case when 3| r we choose to express the Fourier expansion (<ref>) by a change of variables in the form (<ref>). Let S_λ+1/2(N,ψν^r)_ℓ⊆ S_λ+1/2(N,ψν^r) denote the subset of forms whose coefficients are algebraic numbers which are integral at all primes above ℓ. Suppose that ℓ≥ 5 is prime and that r is odd. Suppose that m and λ are positive integers. Let N be a squarefree, odd positive integer such that ℓ∤ N, and 3 ∤ N if 3 ∤ r. Let ψ be a quadratic character modulo N. Suppose that F(z) =∑_n ≡ r 24 a(n)q^n/24∈ S_λ+1/2(N, ψν^r)_ℓ with (2λ,ℓ) suitable for (N,ψ,r), and if λ=1, suppose further that F ∈ S^c_3/2(N,ψν^r). Then there is a positive density set S of primes such that if p ∈ S, then p ≡ 1 ℓ^m and a(p^2n) ≡ 0 ℓ^m if n/p=-1/p^r-1/2ψ(p) if 3 ∤ r, -3/p-1/p^r-1/2ψ(p) if 3| r. In the above theorem and those which follow, our definition of density is that of natural density. It would also be possible to prove an analogue of <cit.> by modifying the proof of Theorem <ref> below. See the remark at the end of Section 7. Our next result on congruences does not rely on the hypothesis of suitability. Suppose that ℓ≥ 5 is prime and that r is odd. Let m and λ be positive integers. Let N be a squarefree, odd positive integer such that ℓ∤ N, and 3 ∤ N if 3 ∤ r. Let ψ be a quadratic character modulo N. Suppose that there exists a ∈ with the property that 2^a≡ -2 ℓ . Let F(z) =∑_n ≡ r 24 a(n)q^n/24∈ S_λ+1/2(N, ψν^r)_ℓ, and if λ=1, suppose further that F ∈ S^c_3/2(N,ψν^r). Then there is a positive density set S of primes such that if p ∈ S then p ≡ -2 ℓ^m and for some _p∈{± 1} we have a(p^2n) ≡ 0 ℓ^m if n/p=_p. The value of _p can be explicitly calculated using Theorem <ref> below. By a result of Hasse <cit.>, the proportion of primes satisfying (<ref>) is 17/24. 
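Condition (<ref>) states that -2 lies in the subgroup of (ℤ/ℓℤ)^× generated by 2, which is easy to test numerically. A minimal check follows (Python; sympy is assumed only for the prime iterator), and the observed proportion is close to 17/24 ≈ 0.708.

```python
from sympy import primerange

def minus_two_is_power_of_two(ell):
    """True if 2^a = -2 (mod ell) for some a, i.e. -2 lies in the cyclic group <2> mod ell."""
    target = ell - 2          # -2 mod ell
    x = 1
    for _ in range(ell):      # the order of 2 divides ell - 1
        x = (x * 2) % ell
        if x == target:
            return True
        if x == 1:            # completed a full cycle of <2> without hitting -2
            return False
    return False

primes = list(primerange(5, 20000))
count = sum(minus_two_is_power_of_two(p) for p in primes)
print(count / len(primes), 17 / 24)
```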
As an example of an application of our main results, we consider congruences for colored generalized Frobenius partitions, which were introduced by Andrews <cit.>, and have been studied by many authors. For a positive integer m, let cϕ_m(n) be the number of generalized Frobenius partitions of n with m colors. By <cit.> we have ∑ cϕ_mn+m24q^n/24=η^-m(z)∑_n=0^∞ r_m(n)q^n, where r_m(n) is the number of representations of n by the quadratic form ∑_i=1^m-1x_i^2+∑_1≤ i<j≤ m-1x_ix_j. In particular, cϕ_1(n) agrees with the ordinary partition function p(n). Congruence properties of cϕ_m have been studied by many people; see for example <cit.>. Since the generating function ∑ r_m(n)q^n is a holomorphic modular form of weight (m-1)/2 and level m or 2m, it follows from the results of Treneer <cit.> that if ℓ≥ 5 is a prime with ℓ∤ m and j is a positive integer, then there are infinitely many Q giving rise to congruences of the form cϕ_mℓ^kQ^3 n+m24≡ 0ℓ^j if (n, ℓ Q)=1 where k is sufficiently large (see <cit.>, <cit.> for details). For the partition function, Atkin <cit.> found a number of examples of congruences of the form p ℓ Q^2 n+124≡ 0 ℓ if nQ=_Q, where ℓ and Q are distinct primes, _Q∈{±1}, and 5≤ℓ≤ 31. In recent work of the first author with Allen and Tang <cit.> it is shown that for every prime ℓ≥ 5, a positive proportion of primes Q give rise to a congruence of the form (<ref>). Our main theorems open the door to proving congruences like Atkin's for the functions cϕ_m. Since the constructions are somewhat involved, we will develop this application fully in a forthcoming paper. In Section <ref> we give an extended example which illustrates the use of these theorems in the case when m=5. The simplest examples of the congruences which we obtain are cϕ_513 ·97^2 n+524 ≡ 0 13 if n97=-1, cϕ_513 ·103^2 n+524 ≡ 0 13 if n103=1. Note that selecting n in appropriate residue classes gives rise to many congruences of the form cϕ_5(ℓ Q^3 n+β)≡ 013; for example, choosing n≡ 19924· 97 in the first example above produces the congruence cϕ_5(13·97^3 n+1014212)≡ 013. In the last section we give many other such examples; these can easily be checked numerically since cϕ_5 can be expressed in terms of the partition function using <cit.>. We close the Introduction with a brief outline of the paper and a sketch of our methods. Section <ref> contains background results on the various sorts of modular forms which we consider. To prove Theorem <ref> in the case r=t=1, we begin by constructing in Section <ref> a two-variable theta function ϑ(z,w) from a lattice L of rank 3 and a ternary quadratic form Q (this construction follows the outline of Niwa and Cipra). The function ϑ(z,w) transforms with weight λ+ 1/2 on Γ_0(N) in the z-variable and with weight 2λ on Γ_0(6N) in the w-variable. We prove directly that ϑ(z,w) behaves nicely with respect to the Atkin-Lehner involutions W_2 and W_3 and the Fricke involution. After checking analytic behavior, we see that the function Φ(w)=∫_Γ_0(N)\ v^λ+1/2F(z) ϑ(z,w) dudv/v^2 gives the lift _1(F). A lengthy but reasonably straightforward calculation in Section <ref> gives the Fourier expansion of Φ(w). We use various operators to prove the remaining assertions in Theorem <ref> and to deduce the theorem for the remaining values of r and t. In particular, the Hecke equivariance of the lift at primes ≥ 5 follows from the Fourier expansion, while the behavior at the primes 2 and 3 is inherited from that of ϑ(z,w). 
The proof of Theorem <ref> parallels that of Theorem <ref>; in Section <ref> we describe the construction of the theta kernel and give a sketch of the remainder of the proof since the details are similar. The arguments in Section <ref> used to prove Theorems <ref> and <ref> are Galois-theoretic. We begin by showing that the condition of suitability is satisfied for most spaces (in particular, justifying the assertion (<ref>)). We require modifications of the arguments of <cit.>; the main technical results are Theorems <ref> (which relies on suitability) and <ref> (which does not); these give large sets of primes for which the Hecke operators act diagonally with prescribed eigenvalues on newforms in the relevant spaces modulo arbitrary powers of a given prime ℓ≥ 5. Filtered through the maps _t, these eigenvalues give the congruences described in the theorems. Finally, in Section <ref> we give an extended example which produces the congruences for cϕ_5 described above. § EXAMPLES §.§ Example 1 The space S_4(6)=S_4^ 2, 3(6, +1, +1) is one-dimensional, spanned by the newform f(z)=η^2(z)η^2(2z)η^2(3z)η^2(6z)=∑ a(n)q^n= q-2 q^2-3 q^3+4 q^4+6 q^5+6 q^6+⋯. Let F(z)=η^5(z)=q^5/24-5 q^29/24+5 q^53/24+10 q^77/24-15 q^101/24+⋯∈ S_5/2(1, ν^5). It follows from Theorem <ref> that each lift _t(F) is a constant multiple of f, and that for p≥ 5 we have FT_p^2=12pa(p)F. We remark that if G(z)=η^3(2z)η^2(3z)η^2(12z)/η^2(6z)=q-3 q^3-2 q^4+6 q^6+6 q^7-3 q^9+⋯∈ S_5/2(12, ν_θ^5), then each lift _t(G) is also a constant multiple of f. §.§ Example 2 Consider the modular form f(z)∈ S_2(14)=S_2^ 2(14, +1) defined by f(z)=η(z)η(2z)η(7z)η(14z)=∑ a(n)q^n=q - q^2 - 2 q^3 + q^4 + 2 q^6 + q^7+⋯ . Define (see Corollary <ref> to compute the multipliers) F_1(z)=η(7z)η^2(z)∈ S_3/27, ∙ 7ν^9, F_2(z)=η(7z)^2η(z)∈ S_3/27, ν^15. Then each lift _t(F_i) is a constant multiple of f, and for p≥ 3 we have F_iT_p^2=-4pa(p)F_i. §.§ Example 3 Let f∈ S_2^ 2(26, -1) be <cit.>; we have f=∑ a(n)q^n=q + q^2 - 3q^3 + q^4 - q^5 - 3q^6 + q^7+⋯. Then the conclusions of Example 2 hold with F_1(z)=η(13z)η^2(z)+137η(13z)^3∈ S_3/213, ∙13ν^15, F_2(z)=7η(13z)^2η(z)+η^3(z)∈ S_3/213, ν^3. §.§ Example 4 Finally, let f∈ S_2^ 2, 3(66, -1, -1) be <cit.>; we have f=∑ a(n)q^n=q + q^2 + q^3 + q^4 - 4q^5 + q^6-2q^7+⋯. We find that the two forms F_1(z)=η(11z)η^2(z)∈ S_3/211, ∙11ν^13, F_2(z)=η(11z)^2η(z)∈ S_3/211, ν^23 each lift to f and satisfy the relationship (<ref>). We note in each of the last three examples that the modular forms F_1 and F_2 are (up to a constant multiple) interchanged by the Fricke involution. § BACKGROUND AND PRELIMINARIES If f is a function on the upper half-plane $̋,k∈1/2, andγ= abcd∈_2^+(), we define f_kγ(z)=(γ)^k/2(cz+d)^-kf(γ z). and f^*_kγ(z)=(γ)^k/2(̅c̅z̅+̅d̅)̅^̅-̅k̅f(γ z). IfN∈andωis a multiplier onΓ_0(N), then we denote byℳ_k(N, ω)the-vector space of functions on$̋ which satisfy the transformation law f_kγ=ω(γ)f for all γ∈Γ_0(N). We denote by M_k(N, ω) and S_k(N, ω) the subspaces of holomorphic and cuspidal modular forms, respectively. If f is a function on $̋ andm∈we definef V_m(z)=f(mz); in other words f V_m=m^-k/2 f_k m001. We also define f U_m=m^k/2-1∑_v m f_k 1v0m. Iffis holomorphic and has period1, so thatf(z)=∑ a(n)q^n, then we have f U_m=∑ a(mn) q^n. §.§ Integral weight modular forms Suppose thatk∈, thatN∈, and thatχis a Dirichlet character moduloN. Define H_N = 0-1N0. The Fricke involutionf↦ f_kH_Ntakesℳ_k(N, χ)toℳ_k(N, χ̅). For primesp, the mapf↦ fV_ptakesℳ_k(N, χ)toℳ_k(Np, χ). 
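The q-expansions quoted in these examples can be reproduced directly from η(z) = q^{1/24}∏_{n≥1}(1-q^n). A short sketch follows (Python; the function names are ours); it computes truncated expansions of eta quotients and returns the power of q^{1/24} separately, so that the exponent r of the multiplier ν^r can be read off.

```python
def mul(a, b, prec):
    """Product of two truncated power series (lists of length prec)."""
    c = [0] * prec
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < prec:
                    c[i + j] += ai * bj
    return c

def inverse(a, prec):
    """Inverse of a truncated power series with a[0] = 1."""
    inv = [0] * prec
    inv[0] = 1
    for n in range(1, prec):
        inv[n] = -sum(a[k] * inv[n - k] for k in range(1, n + 1))
    return inv

def eta_product(factors, prec=12):
    """Truncated q-expansion of prod_d eta(d z)^(e_d), with factors = {d: e_d}.
    Returns (shift, coeffs): the form equals q^(shift/24) * sum_n coeffs[n] q^n,
    computed from eta(z) = q^(1/24) prod_{n>=1} (1 - q^n)."""
    shift = sum(d * e for d, e in factors.items())
    series = [1] + [0] * (prec - 1)
    for d, e in factors.items():
        E = [1] + [0] * (prec - 1)          # E = prod_{n>=1} (1 - q^(d n)), truncated
        for n in range(1, prec // d + 1):
            if d * n < prec:
                term = [1] + [0] * (prec - 1)
                term[d * n] = -1
                E = mul(E, term, prec)
        if e < 0:
            E, e = inverse(E, prec), -e
        for _ in range(e):
            series = mul(series, E, prec)
    return shift, series

# Example 1: eta(z)^2 eta(2z)^2 eta(3z)^2 eta(6z)^2; shift 24, i.e. q - 2q^2 - 3q^3 + 4q^4 + ...
print(eta_product({1: 2, 2: 2, 3: 2, 6: 2}))
# Example 2: F_1 = eta(7z) eta(z)^2; shift 9, i.e. q^(9/24)(1 - 2q - q^2 + ...)
print(eta_product({7: 1, 1: 2}))
```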
Ifp∤ Nthen the mapf↦ fU_ptakesℳ_k(N, χ)toℳ_k(Np, χ), while ifp| Nthen the map preservesℳ_k(N, χ). For each primepwe have the Hecke operatorT_p: S_k(N, χ)→ S_k(N, χ)defined by T_p=U_p+χ(p)p^k-1V_p. Ifp| Nand(p, N/p)=1, then the Atkin-Lehner matrixW^N_pis any integral matrix with W^N_p =pαδNβp , (W^N_p)=p. Ifχis defined moduloN/pthen the operator_kW^N_ppreserves the spaceℳ_k(N, χ), and on this space is independent of the particular choice of matrix <cit.>. Note that we may takeδ=1in (<ref>). Since scalar matrices act trivially under_k, it will sometimes be convenient to work with the scaled matrices H_N= 0-1/√(N)√(N)0, W^N_p=√(p) αδ/√(p)Nβ/√(p)√(p). All of the above statements remain true withℳ_kreplaced byM_korS_k. Ifp| Nandχis defined moduloN/p, then we denote byS_k^ p(N, χ)the orthogonal complement (with respect to the Peterson inner product) of the subspace generated byS_kN/p, χandS_kN/p, χV_p. By <cit.> and an argument as in the proof of <cit.> we have f∈ S_k^ p(N, χ) ^N_N/pf=^N_N/pf_kH_N=0, where ^N_N/p: S_kN, χ→ S_kN/p, χ is the trace map defined by ^N_N/pf=∑ f_kR_i where theR_i∈Γ_1(N/p)are right coset representatives ofΓ_0(N)inΓ_0(N/p). We record a standard lemma for convenience. Suppose that p| N is a prime with (p, N/p)=1, that χ is a Dirichlet character modulo N/p, and that f∈ S_k(N, χ). Then ^N_N/pf = f+χ̅(p)p^1-k/2f _k W_p^NU_p, ^N_N/pf_k H_N = f_kH_N+χ(p)p^1-k/2f_kH_N_kW_p^NU_p. The second assertion follows from the first. To prove the first, write W_p^N=pα1Nβp. The identity matrix together with the matrices S_j:=1pW_p^N 1j0p=α1+jαNβ/pjNβ/p+p, 0≤ j< p are a set of right coset representatives for Γ_0(N) in Γ_0(N/p). Choose a matrix γ= a b c d∈Γ_0(N) with a≡ pN/p. Then the identity matrix together with the matrices γ S_j, 0≤ j<p form a set of representatives R_i of the required form. The lemma follows from (<ref>). A corollary follows directly from the lemma and (<ref>). Suppose that p| N is a prime with (p, N/p)=1, that χ is a Dirichlet character modulo N/p, and that f∈ S_k^ p(N, χ, _p) (where _p denotes the W_N/p^N eigenvalue). Then we have f U_p=-_pχ(p)p^k/2-1 f. In particular if f=∑ a(n)q^n is a newform then a(p)=-_pχ(p)p^k/2-1. §.§ Modular forms for the theta-multiplier The standard theta function is given by θ(z) = ∑_n=-∞^∞ q^n^2. This is a modular form of weight1/2onΓ_0(4)with multiplierν_θdefined by θ_1/2γ=ν_θ(γ)θ, γ∈Γ_0(4). Ifγ= abcd∈Γ_0(4)then we have ν_θ(γ)= cd_d^-1, where _d = 1 if d≡ 1 4, i if d ≡ 3 4. For odd values ofd,d_1,d_2we recall the formulas e1-d8 = 2d_d and _d_1d_2 = _d_1_d_2(-1)^d_1-1/2d_2-1/2. The spaces of modular forms of half integral weight in the sense of Shimura <cit.> are M_λ+1/2(N, ψν_θ^2λ+1), where4| Nandψis a Dirichlet character moduloN. On these spaces there are Hecke operatorsT_p^2^ sfor primesp. ForF=∑ a(n)q^n∈ M_λ+1/2(N, ψν_θ^2λ+1)we have the explicit description T_p^2^ s F=∑a(p^2n)+-1p^λ np ψ(p)p^λ-1a(n)+ψ^2(p)p^2λ-1a np^2q^n. We recall the definition of the standard Shimura lift. Suppose thatF=∑ a(n)q^n∈ S_λ+1/2(N, ψν_θ^2λ+1), whereλ≥ 1; ifλ=1suppose further thatFis in the orthogonal complement of the space spanned by single variable theta series. Iftis a positive squarefree integer, Shimura's lift is given by_t(F)=∑ c(n)q^n, where the coefficientsc(n)are given by ∑_n=1^∞c(n)/n^s=Ls-λ+1, ψ-1∙^λ t∙∑_n=1^∞a(tn^2)/n^s. After the work of Shimura, Niwa, and Cipra <cit.> we have_t(F)∈ S_2λN/2, ψ^2. Moreover, the lift is equivariant with respect to the Hecke operatorsT_pandT_p^2^ s. 
§.§ Modular forms for the eta-multiplier The Dedekind eta function is given by η(z)=q^1/24∏_n=1^∞(1-q^n). This is a modular form of weight1/2on_2(), and the eta-multiplierνis defined by η_1/2γ=ν(γ)η, γ∈_2(). Note that we haveν(γ)^24=1for allγ. We have the explicit formulas <cit.> forc>0: ν(γ) = dc e124(a+d)c-bd(c^2-1)-3c, if c is odd, cd e124(a+d)c-bd(c^2-1)+3d-3-3cd, if c is even, ν(-γ)=iν(γ). Letψbe a Dirichlet character defined moduloN, andr∈. We will be concerned with the spacesM_λ+1/2N, ψν^rwhereλ∈_≥0. Whenψis trivial we omit it from the notation. We recall <cit.> that M_λ+1/2N, ψν^r={0} unless ψ(-1)≡ r-2λ4. In particular these spaces are trivial unlessris odd. This condition may be written in the form M_λ+1/2N, ψν^r={0} unless ψ(-1)=-1r(-1)^λ. EachF∈ M_λ+1/2N, ψν^rhas a Fourier expansion of the form F(z)=∑_n≡ r24 a(n)q^n/24; if(r, 6)=3it will typically be more convenient to represent this expansion in the form F(z)=∑_n≡r/38 a(n)q^n/8. The next lemma describes a connection between these two multipliers. If (r, 6)=1 then F ∈ℳ_λ+1/2N, ψν^r F V_24∈ℳ_λ+1/2576N, ψ-1∙^λ+r-1/212∙ν_θ^2λ+1. If (r, 6)=3 then F ∈ℳ_λ+1/2N, ψν^r F V_8∈ℳ_λ+1/264N, ψ-1∙^λ+r-1/2ν_θ^2λ+1. Suppose that (r, 6)=1. After a computation using (<ref>), (<ref>) and (<ref>), we find that F ∈ℳ_λ+1/2N, ψν^r F V_24∈ℳ_λ+1/2576N, ψ12∙ν_θ^r. From (<ref>) we see that ν_θ^r=ν_θ^2λ+1ν_θ^-1r(-1)^λ-1=ν_θ^2λ+1-1∙^-1r(-1)^λ-1/2. For all λ we have -1r(-1)^λ-12≡λ+r-12 2, from which ν_θ^r=ν_θ^2λ+1-1∙^λ+r-1/2. The lemma follows in the case (r, 6)=1. If (r, 6)=3 then a computation analogous to that which establishes (<ref>) shows that F ∈ℳ_λ+1/2N, ψν^r F V_8∈ℳ_λ+1/264N, ψν_θ^r. The lemma follows from the relationship (<ref>). Hecke operators can be defined onM_λ+1/2N, ψν^rusing the definition (<ref>) on spaces with the theta multiplier together with (<ref>) and (<ref>) (see e.g. <cit.> for the caseN=1). Suppose that(r, 6)=1, thatp≥ 5is prime, and that F(z) = ∑_n≡ r24 a(n) q^n/24∈ M_λ+1/2(N,ψν^r). Then the action ofT_p^2is given by T_p^2F=∑_n≡ r(24)a(p^2n)+-1p^r-1/212npψ(p)p^λ-1a(n)+ψ^2(p)p^2λ-1a np^2q^n/24. When(r, 6)=3,p≥ 3is prime, and F(z) = ∑_n≡r/38 a(n) q^n/8∈ M_λ+1/2(N,ψν^r), then the action is given by T_p^2F=∑_n≡r/38a(p^2n)+-1p^r-1/2npψ(p)p^λ-1a(n)+ψ^2(p)p^2λ-1a np^2q^n/8. These operators preserve the spaces of cusp forms. If (r, 6)=3, then the definitions (<ref>) and (<ref>) disagree only when p=3. Suppose that r and t are odd, and that 3∤ t if 3∤ r. If abcd∈Γ_0(t) then ν^r(atbc/td) = dt ν^rt(abcd). By the last identity in (<ref>) we may assume that c>0. The lemma can then be checked using the explicit formulas in (<ref>). In the case when c is even the computation relies on the identity td e((1-t)(d-1)/8) = dt, which is one form of quadratic reciprocity. Suppose that r and t are odd, and that 3∤ t if 3∤ r. If F∈ℳ_λ+1/2(N,ψν^r) then F V_t ∈ℳ_λ+1/2(Nt,ψ∙ t ν^rt). Finally we record some technical results which will be important later. As in <cit.> we define an involution which acts on functions on$̋ for λ∈: ( F𝒲_N, λ+1/2)(z)=N^-λ/2-1/4(-iz)^-λ-1/2F(-1/Nz). If γ= abcd∈Γ_0(N) define γ'= d-c/N-bNa=H_Nγ H_N^-1, and define _γ∈{± 1} as follows: _γ=1 if and only if any of the following is satisfied: * a≥ 0. * a<0 and bc<0. * b=0, c≥ 0, and a=d=-1. * c=0, b≤ 0, and a=d=-1. If γ∈Γ_0(N) and F is a function on $̋ then F𝒲_N, λ+1/2_λ+1/2γ=_γ F_λ+1/2γ'𝒲_N, λ+1/2. For the proof the following facts are useful: zw^1/2=z^1/2w^1/2, (-z)^1/2=-iz^1/2 for z, w∈$̋. 
Computing each side and using the first fact, we see that it suffices to prove that az+bcz+d^1/2(cz+d)^1/2=_γ z^1/2az+bz^1/2. This is established with a straightforward but tedious calculation. For example, ifa<0,b>0andc<0, we find that the left side of (<ref>) is-i(-az-b)^1/2, and that z^1/2az+bz^1/2=-i z^1/2-az-bz^1/2=-i(-az-b)^1/2, from which_γ=1. The other cases are similar and we omit the details. Letν_Nbe the multiplier onΓ_0(N)associated toη(Nz). If ψ is a Dirichlet character modulo N and F∈ℳ_λ+1/2N, ψ̅ν_N then F𝒲_N, λ+1/2∈ℳ_λ+1/2N, ψν. Applying Lemma <ref> with F=η and using the fact that η𝒲_N, 1/2=N^1/4η(Nz) shows that for γ∈Γ_0(N) we have ν_N(γ)=_γν(γ'). Since γ”=γ it follows that ν_N(γ')=_γ'ν(γ). Applying the lemma with F∈ℳ_λ+1/2N, ν_Nψ̅ shows that for γ∈Γ_0(N) we have F𝒲_N, λ+1/2_γ+1/2γ=_γν_N(γ')ψ̅(γ')F𝒲_N, λ+1/2. It can be checked from the definition that _γ=_γ', and the result follows from these facts. §.§ Comparison of the lifts We prove a proposition describing the relationship between the lifts_tand_t. Suppose that (r, 6)=1, that t is a squarefree positive integer, and that F=∑ a(n)q^n ∈ S_λ+1/2(N,ψν^r) and _t(F)=∑ b(n)q^n are as in (<ref>) and (<ref>). Let _t(F V_24)=∑ c(n)q^n be the usual Shimura lift defined in (<ref>). Then for all n we have c(n)= 12nb(n). Suppose that (r, 6)=3, that t is a squarefree positive integer, and that F=∑ a(n)q^n ∈ S_λ+1/2(N,ψν^r) and _t(F)=∑ b(n)q^n are as in (<ref>) and (<ref>). Let _t(F V_8)=∑ c(n)q^n. Then for all n we have c(n)= -4nb(n). If (r, 6)=1 we may assume that t≡ r24 (otherwise both lifts are zero). Using Lemma <ref> and quadratic reciprocity we see that the coefficients of _t(F V_24) are given by ∑c(n)/n^s=Ls-λ+1, ψ12∙∙ t∑a(tn^2)/n^s. The claim follows from comparing this with the definition of b(n) (recall that a(n)=0 if (n, 6)≠ 1). If (r, 6)=3 and t≡ r/3 8, then the coefficients of _t(F V_8) are given by ∑c(n)/n^s=Ls-λ+1,ψ-4∙^1-r/2 t∙∑a(tn^2)/n^s. Note that -4∙^1-r/2 t∙=-4∙∙ t. The proposition follows in the same way. § CONSTRUCTION AND PROPERTIES OF THETA KERNELS In this section we modify the theta kernels of Niwa <cit.> and Cipra <cit.> to construct a theta functionϑ(z,w),z,w∈$̋, with ϑ(·,w) ∈ℳ_λ+1/2(N,ψν) and ϑ(z,·) ∈ℳ_2λ(6N,ψ^2), and such that ϑ(z,·) is an eigenform of the operators U_p, W_p^6N, and H_6N. Here N and λ are positive integers and ψ is a Dirichlet character modulo N with ψ(-1)=(-1)^λ (see (<ref>)). In the next section we will use ϑ(z,w) to define the Shimura lift and prove Theorem <ref> in the case r=t=1. The remaining cases are deduced by a separate argument. Let L be a lattice in ^n of rank n and let Q be an n× n symmetric matrix with rational entries and signature (p,q), with p+q=n. Define the bilinear form ⟨·, ·⟩:^n×^n → by ⟨ x,y ⟩ = x^T Q y. For σ= abcd∈_2() the Weil representation σ↦ r(σ) defined in 1 of <cit.> (see also 1 of <cit.>) acts on Schwartz functions f:^n→ via [r(σ)f](x) = |a|^n/2 e( 12 ab⟨ x,x ⟩) f(ax) if c=0, | Q|^-1/2|c|^-n/2∫_^n e(a⟨ x,x ⟩ - 2⟨ x,y ⟩ + d ⟨ y,y ⟩/2c) f(y) dy if c≠ 0. For μ∈, a function f:^n→ is said to have the weight μ spherical property if r(κ(ϕ))f = (κ(ϕ))^p-q e^iμϕf where κ(ϕ) = cosϕsinϕ-sinϕcosϕ and (σ) = i^ c/2 if c≠ 0, i^1- d/2 if c=0. Let L^∗ = {x∈^n : ⟨ x,y ⟩∈ for all y∈ L} denote the dual lattice. If h∈ L^∗ and f is a Schwartz function with the weight μ spherical property, define θ(z,f,h) = v^-μ/2∑_x∈ L [r(σ_z) f](x+h), where z=u+iv and σ_z∈_2() maps to z under the map σ↦σ i, that is, σ_z = √(v)u/√(v)01/√(v). 
Note that by (<ref>) we have θ(z,f,h) = v^n/4-μ/2∑_x∈ L e( 12 u⟨ x+h,x+h ⟩) f(√(v)(x+h)). Theorem 1.5 of <cit.> (see also Corollary 0 of <cit.>) gives the following transformation law for θ(z,f,h). Let γ= abcd∈_2(). If h∈ L^*/L and f satisfies the weight μ spherical property (<ref>) then θ(γ z,f,h) = i^q-p/2( c) (cz+d)^μ∑_k∈ L^∗/L c(h,k)_γθ(z,f,k), where c(h,k)_γ = δ_k,ahe(ab/2⟨ h,k ⟩) if c=0, and otherwise c(h,k)_γ = | Q|^-1/2(vol L)^-1|c|^-n/2∑_r∈ L/cL ea⟨ h+r,h+r ⟩ - 2⟨ k, h+r ⟩ + d⟨ k,k ⟩2c. As in Theorem 1.9 of <cit.>, we can construct functions satisfying the spherical property by taking combinations of products of Hermite polynomials. For each integer μ≥ 0 let H_μ(x) denote the Hermite polynomial H_μ(x) = (-1)^μexp( 12x^2) d^μ/dx^μexp(- 12x^2). Then H_0(x) = 1, H_1(x) = x, H_2(x) = x^2-1, etc. These coincide with the Hermite polynomials _μ(x) in <cit.>. By <cit.> we have the generating function ∑_μ=0^∞H_μ(x)/μ!z^μ = e^xz-1/2z^2. The theta kernel ϑ(z,w) which we use in Section <ref> to define the Shimura lift is constructed by starting with a lattice of rank 3 with associated bilinear form ⟨ x,y ⟩ that splits as ⟨ x,y ⟩ = ⟨ x_2,y_2 ⟩_1 + ⟨ (x_1,y_1),(x_3,y_3) ⟩_2, where ⟨·,·⟩_1 and ⟨·,·⟩_2 are bilinear forms on and ^2, respectively. A key property of ϑ(z,w) is that ϑ(z,iy) splits into a linear combination of products of the form ϑ_1,μ(z)ϑ_2,λ-μ(z,y) where ϑ_1,μ and ϑ_2,λ-μ are theta series associated to ⟨·,·⟩_1 and ⟨·,·⟩_2. In the next two subsections, we construct these theta functions, using notation consistent with the splitting (<ref>). §.§ A theta series of rank 1 We first construct a family of theta series that transforms with multiplier system ν_N by following Example 1 of <cit.>. The first element of this family will be ϑ_1,0(z) = 2η(Nz), which obviously satisfies the desired property. The other elements differ from this distinguished element only by a choice of Schwartz function f. Define L_1 = 12N , L_1' = N , L_1^∗ = . Then L_1^∗ is dual to L_1 for the bilinear form ⟨ x,y ⟩ = xy/12N associated to Q=1/12N. For h∈ L_1' let h_2=h/N∈. We will use the Schwartz function f_1,μ(x_2) = H_μ(√(π3N) x_2) e^-π12N x_2^2. By Theorem 1.9 of <cit.> the function f_1,μ has the spherical property (<ref>) for weight μ+1/2. Thus it makes sense to form the theta series ϑ_1,μ(z) = ∑_h∈ L_1'/L_1χ_12(h_2) θ(z,f_1,μ,h), where χ_12 = 12∙. By (<ref>) we have ϑ_1,μ(z) = v^-μ/2∑_x_2∈χ_12(x_2) H_μ(√(13π N v) x_2) e(124Nx_2^2 z). In the special case μ=0 we have ϑ_1,0(z) = ∑_x_2∈χ_12(x_2) eNx_2^2 z24 = 2η(Nz). We use this to obtain a formula for the coefficients c(h,k)_γ in the transformation law. Let h∈ L_1'/L_1 and γ∈Γ_0(N). For k∈ L_1^∗/L_1, let c_1(h,k)_γ:=c(h,k)_γ be as in Proposition <ref>. * If k∉ L_1'/L_1 then c_1(h,k)_γ = 0. * If k=Nk_2∈ L_1'/L_1 then i^-1/2( c)∑_h∈ L_1'/L_1χ_12(h_2) c_1(h,k)_γ = χ_12(k_2) ν_N(γ). If c=0 then (1) follows immediately from the formula given in Proposition <ref>. If c≠ 0, replace r by r+12c in (<ref>). Then, since N|(c,r,h) we find that c_1(h,k)_γ = e-kNc_1(h,k)_γ. Thus c_1(h,k)_γ=0 unless N| k, that is, k∈ L_1'/L_1. By Proposition <ref>, (<ref>), and (<ref>) we have ν_N(γ) ∑_h∈ L_1'/L_1χ_12(h_2)θ(z,f_1,0,h) = i^-1/2( c)∑_k∈ L_1'/L_1θ(z,f_1,0,k) ∑_h∈ L_1'/L_1χ_12(h_2) c_1(h,k)_γ. For h∈ L_1'/L_1, the Fourier expansion of θ(z,f_1,0,h) is θ(z,f_1,0,h) = ∑_ℓ≡ h_2(12) eNℓ^2z24, so θ(z,f_1,0,h) and θ(z,f_1,0,h') are linearly independent unless h_2≡± h_2'12. Equation (<ref>) now follows from (<ref>) and the fact that c_1(-h,-k)_γ = c_1(h,k)_γ. 
The previous lemma gives a transformation law for ϑ_1,μ(z). For μ≥ 0 and for γ= abcd∈Γ_0(N) we have ϑ_1,μ(γ z) = ν_N(γ)(cz+d)^μ+1/2ϑ_1,μ(z). By Proposition <ref> we have ϑ_1,μ(γ z) = i^-1/2( c) (cz+d)^μ+1/2∑_k∈ L_1'/L_1θ(z,f_1,μ,k) ∑_h∈ L_1^*/L_1χ_12(h_2) c_1(h,k)_γ. Lemma <ref> yields the desired result. Since ν_N(-I)=-i we see that ϑ_1,μ = 0 whenever μ is odd. We can also determine the behavior of these theta functions under z↦ -1/Nz. For every μ≥ 0 we have ϑ_1,μ(-1/Nz) = i^-1/2z^μ+1/2ϑ_1,μ(z/N). Write f=f_1,μ and h=Nh_2 for h∈ L_1'/L_1. Then Proposition <ref> gives i^1/2ϑ_1,μ(-1/z) = z^μ+1/2 (12N)^-1/2∑_h∈ L_1'/L_1χ_12(h_2) ∑_k∈ L_1^∗/L_1 e-h_2k12θ(z,f,k) = z^μ+1/2 (12N)^-1/2∑_k(12N)θ(z,f,k) ∑_h_2(12)χ_12(h_2) e-h_2k12. The inner Gauss sum evaluates to χ_12(k)√(12). Thus i^1/2ϑ_1,μ(-1/z) = z^μ+1/2 N^-1/2∑_k(12N)χ_12(k) θ(z,f,k). Replacing z by Nz we obtain i^1/2θ_1,μ(-1/Nz) = (Nv)^-μ/2 (Nz)^μ+1/2 N^-1/2∑_k(12N)χ_12(k) ∑_x≡ k(12N) eux^224 f(√(Nv) x). Writing k≡ k_0 12, with k_0∈/12 the latter equation becomes i^1/2θ_1,μ(-1/Nz) = (Nv)^-μ/2 (Nz)^μ+1/2 N^-1/2∑_k_0(12)χ_12(k_0) ∑_x≡ k_0(12) eux^224 f(√(Nv) x) = N^-μ-1/2 (Nz)^μ+1/2∑_k_0(12)χ_12(k_0) θ(z/N,f,Nk_0) = z^μ+1/2∑_h∈ L_1'/L_1χ_12(h_2) θ(z/N,f,k). Equation (<ref>) follows. §.§ A theta series of rank 2 Next we construct a family of theta series that transform with (integral) weight λ-μ, where 0≤μ≤λ. We will eventually combine these with the theta series from the previous subsection to construct the two-variable theta kernel which will be used in the Shimura lift. Let L_2 = N⊕ 6N, L_2' = ⊕ 6N, L_2^∗ = ⊕ 6. Then L_2^∗ is dual to L_2 with respect to the bilinear form associated to Q = 1/6N-1-1. Let ψ be a Dirichlet character modulo N and for x=(x_1,6Nx_3)∈ L_2' define ψ(x)=ψ(x_1). Suppose that f has the weight μ spherical property. Then for γ= abcd∈Γ_0(N) we have ∑_h∈ L_2'/L_2ψ(h)θ(γ z,f,h) = ψ(d)(cz+d)^μ∑_h∈ L_2'/L_2ψ(h)θ(z,f,h). Let h∈ L_2'/L_2. By Proposition <ref> we have θ(γ z,f,h) = (cz+d)^μ∑_k∈ L_2^∗/L_2 c(h,k)_γθ(z,f,k). If c=0 then c(h,k)_γ=δ_k,ah because k=ah implies that k=(*,0). If c≠ 0 then c(h,k)_γ = N^-1|c|^-1∑_r∈ L_2/cL_2 ea⟨ h+r,h+r ⟩ - 2⟨ k,h+r ⟩ + d⟨ k,k ⟩2c. We write h=(h_1,0), k=(k_1,6k_3), and r=(Nr_1,6Nr_3). Then c(h,k)_γ = N^-1|c|^-1 eh_1k_3-dk_1k_3Nc∑_r_1(c)∑_r_3(c) e-ah_1r_3-aNr_1r_3+k_1r_3+k_3r_1c. The latter expression equals zero unless k_1 = ah_1 and k_3=0 k=ah. Writing c=Nc', we obtain c(h,k)_γ = δ_k,ahN^-1 |c|^-1∑_r_1, r_3(c) e-ar_1r_3c' = δ_k,ah. Thus, for all γ∈Γ_0(N) we have c(h,k)_γ = δ_k,ah. It follows that ∑_h∈ L_2'/L_2ψ(h)θ(γ z,f,h) = (cz+d)^μ∑_h∈ L_2'/L_2ψ(h) θ(z,f,ah) = ψ(d) (cz+d)^μ∑_h∈ L_2'/L_2ψ(h) θ(z,f,h). This completes the proof. We specialize to f=f_2,μ,y where f_2,μ,y(x) = H_μ( √(%s/%s)π3N (y^-1x_1-yx_3) ) exp-π(y^-2x_1^2+y^2x_3^2)6N, and we define ϑ_2,μ(z,y) = v^-μ/2∑_h∈ L_2'/L_2ψ̅(h_1) ∑_x∈ L [r(σ_z) f_2,μ,y](x+h). By Theorem 1.9 of <cit.> the function f_2,μ,1 has the spherical property in weight μ. It will be useful in the next section to have the following expression for ϑ_2,μ(z,y). For any μ≥ 0 we have ϑ_2,μ(z,y) = i^μ v^-μ/√(6N) y^-1-μπ3N^μ/2∑_x_1,x_3∈ψ̅(x_1) (x_1z̅+x_3)^μexp( -π/6Nvy^2|x_1 z+x_3|^2 ). By (<ref>) we have ϑ_2,μ(z,y) = v^1-μ/2∑_x_1,x_3∈ψ̅(x_1) e(-ux_1x_3) f_2,μ,y(√(v) x_1,6N√(v) x_3), where we have written x∈ L_2' as x=(x_1,6Nx_3). Performing Poisson summation on x_3 we find that ϑ_2,μ(z,y) = v^1-μ/2∑_x_1,x_3∈ψ̅(x_1) g(x_3), where g(x_3) = ∫_-∞^∞ f_2,μ,y(√(v) x_1, 6N√(v) t) e^-2π i(ux_1+x_3)t dt. 
Using (<ref>) and making the change of variable s=√(π v/3N)(y^-1x_1-6Nyt) we find that g(x_3) = 1/2y√(3π N v) e(-x_1/6N y^2(x_1z̅+x_3)) ∫_-∞^∞ H_μ(s) e^-1/2s^2+isw ds, where w=√(π)/y√(3N v)(x_1 z̅+x_3). By <cit.> we have ∫_-∞^∞ H_μ(s) e^-1/2s^2+isw ds = i^μ√(2π) w^μ e^-w^2/2, which holds for complex w by analytic continuation. Thus g(x_3) = i^μ/√(6N)π3N^μ/2(vy^2)^-1-μ/2(x_1z̅+x_3)^μexp( -π/6Nvy^2|x_1 z+x_3|^2 ). The result follows. §.§ A theta series of rank 3 Combining the theta series of rank 1 with the theta series of rank 2 amounts to setting L = N⊕ 12N⊕ 6N, L' = ⊕ N⊕ 6N, L^* = ⊕⊕ 6. Then L^* is dual to L with respect to the bilinear form ⟨ x,y ⟩ = x_2y_2 - 2x_1y_3 - 2x_3y_1/12N of signature (2,1) associated to Q = 1/12N([ -2; 1 ; -2 ]). For each λ≥ 0 we have the Hermite identity (x-iy)^λ = ∑_μ=0^λλμ(-i)^μ H_λ-μ(x)H_μ(y) which follows easily from the generating function identity (<ref>). Let f_3(x) = (x_1-ix_2-x_3)^λexp(-π/12N(2x_1^2+x_2^2+2x_3^2)) = π3N^-λ/2∑_μ=0^λλμ(-i)^μ f_2,λ-μ,1(x_1,x_3)f_1,μ(x_2). It follows that f_3 has the spherical property in weight λ+1/2. The next lemma is similar to the construction in Example 3 of <cit.>. Suppose that f has the spherical property in weight λ+1/2, and define θ_3(z) = ∑_h∈ L'/Lψ̅(h_1) χ_12(h_2) θ(z,f,h). Then for each γ = abcd∈Γ_0(N) we have θ_3(γ z) = ν_N(γ)ψ̅(d)(cz+d)^λ+1/2θ_3(z). By Proposition <ref> we have θ_3(γ z) = i^-1/2( c) (cz+d)^λ+1/2∑_k∈ L^*/Lθ(z,f,k) ∑_h∈ L'/Lψ̅(h_1)χ_12(h_2) c(h,k)_γ. We employ the splitting (<ref>), together with Lemma <ref> and the proof of Lemma <ref> to evaluate c(h,k)_γ. The only terms that are nonzero are those with k∈ L'/L. For such k we have i^-1/2( c)∑_h∈ L'/L ψ̅(h_1) χ_12(h_2) c(h,k)_γ = ∑_(h_1,0)∈ L_2'/L_2δ_k_1,ah_1ψ̅(h_1) × i^-1/2( c)∑_Nh_2∈ L_1'/L_1χ_12(h_2) c_1(h,k)_γ = ψ̅(dk_1)ν_N(γ) χ_12(k_2). The lemma follows. For w=ξ+iy∈$̋ define ϑ^*(z,w) = y^-λ∑_h∈ L'/Lψ̅(h_1) χ_12(h_2) θ(z,σ_w f_3,h), where the action ofg∈_2()on functions is given bygf(x) = f(g^-1x), and the action ofgonx∈^3is given by x_11/2 x_21/2 x_2x_3↦ gx_11/2 x_21/2 x_2x_3g^T. By (<ref>) the Weil representation commutes with the action ofSO(Q), that is, the group of matrices leaving⟨·, ·⟩invariant. Since the action (<ref>) gives an isomorphism ofSO(Q)with_2(), the functionσ_w f_3has the spherical property. Since ⟨ gx, gx⟩=⟨ x,x⟩ we have ϑ^*(z,w) = v^1-λ/2y^-λ∑_x∈ L'ψ̅(x_1) χ_12(x_2) e( 12u⟨ x,x ⟩) f_3(√(v) σ_w^-1x). It is straightforward to verify the relations σ_γ w = γσ_w κ((cw+d)), γ = abcd ∈_2(), and f_3(κ(ϕ)x) = e^2iλϕf_3(x). Thus f_3(κ((cw+d))^-1x) = cw̅+d|cw+d|^2λ f_3(x). Furthermore, forγ∈Γ_0(6N)the mapx↦γ xleaves the latticeL'and the quantityχ_12(x_2)invariant and mapsψ̅(x_1)toψ^2(d)ψ̅(x_1). It follows that ϑ^∗(z,γ w) = ψ̅^2(d) cw̅+d|cw+d|^2λ(γ w)^-λy^λϑ^∗(z,w) = ψ^2(d) (cw̅+d)^2λϑ^∗(z,w). As in <cit.> and <cit.>ϑ^*(z,w)is not the correct theta kernel; instead we will use ϑ(z,w): = N^-λ/2-1/4(-iz)^-λ-1/2(6N)^-λw̅^-2λϑ^*(-1/Nz,-1/6Nw). Then by Lemma <ref>, Corollary <ref> and equation (<ref>) we have ϑ(·,w) ∈ℳ_λ+1/2N, ψν ϑ(z, ·) ∈ℳ_2λ6N, ψ^2. The following lemma provides a useful expression forϑ(z,w)on the imaginary axisw=iy. We have ϑ(z,iy) = ∑_μ=0 μ even^λ c_μ y^1-μ∑_g∈ψ̅(-g)g^λ-μ ×∑_γ∈Γ_∞\Γ_0(N)ψ̅(γ) (cz+d)^-λ-1/2exp(-6π y^2 g^2/(γ z))ν^-1(γ) (γ z)^μ-λϑ_1,μγ zN, where c_μ = λμ 6^λ-μ+1/23π^μ/2 N^λ/2-μ/2+1/4. Recall that ϑ_1,μ=0 for odd μ. Since σ_iy^-1x = (x_1/y, x_2, yx_3) we have ϑ^*(z,iy) = π3N^-λ/2 y^-λ∑_μ=0 μ even ^λλμ(-i)^μϑ_1,μ(z) ϑ_2,λ-μ(z,y). 
By Lemma <ref> we have ϑ_2,λ-μ(-1/Nz,y) = π3N^λ-μ/2i^λ-μv^μ-λ/y^λ-μ+1√(6N)z^λ-μ ×∑_x_1,x_3∈ψ̅(-x_1)(x_1+Nx_3z̅)^λ-μexp(-π|x_1+Nx_3z|^2/6N^2vy^2). Let g=(x_3)(x_1,Nx_3) and write Nx_3=gc and x_1 = gd. Then the latter sum equals ∑_g∈ψ̅(-g)g^λ-μ∑_N| c≥ 0 (c,d)=1ψ̅(d) (cz̅+d)^λ-μexp(-π g^2|cz+d|^2/6N^2vy^2) = ∑_g∈ψ̅(-g)g^λ-μ∑_γ∈Γ_∞\Γ_0(N)ψ̅(γ) (cz̅+d)^λ-μexp(-π g^2/6N^2(γ z)y^2), where γ = **cd. By Lemma <ref> we have ϑ_1,μ(-1/Nz) = i^-1/2z^μ+1/2ϑ_1,μ(z/N). Since ϑ_1,μ(z/N) transforms like η(z) in weight μ+1/2 we have v^μ-λ(cz̅+d)^λ-μϑ_1,μ(-1/Nz) = i^-1/2z^μ+1/2ν^-1(γ) (γ z)^μ-λ (cz+d)^-λ-1/2ϑ_1,μγ zN. It follows that (-iz)^-λ-1/2ϑ^*(-1/Nz,iy) = (-1)^λ/√(6N)∑_μ=0 μ even^λλμπ3N^-μ/2 y^-2λ+μ-1∑_g∈ψ̅(-g)g^λ-μ ×∑_γ∈Γ_∞\Γ_0(N)ψ̅(γ) (cz+d)^-λ-1/2exp(-π g^2/6N^2(γ z)y^2)ν^-1(γ) (γ z)^μ-λϑ_1,μγ zN. From here it is straightforward to obtain (<ref>). §.§ Further properties of the theta kernel Here we assume that(N, 6)=1and take H_6N:= 0 -1/√(6N) √(6N) 0, W_p^6N= √(p)α1/√(p)6Nβ/√(p)√(p), pα-6Nβ/p=1, p∈{2, 3}. Recalling the notation (<ref>), the definition (<ref>) can be written in the form ϑ(z, w)=ϑ^*(z, w)𝒲_N, λ+1/2^*_2λH_6N where the first operator acts onzand the second onw. The following result describes the action ofU_pandW_p^6Non these theta functions forp∈{2, 3}. It will be important to determining the properties of the lifts at these primes. Suppose that (N, 6)=1. For p ∈{2,3} the following are true (where all operators act on the variable w). ϑ̅(̅z̅,̅w̅)̅ U_p=p^λ-1ψ(p)ϑ̅(̅z̅,̅w̅)̅, ϑ̅(̅z̅,̅w̅)̅_2λ W_p^6N=-ψ(p)ϑ̅(̅z̅,̅w̅)̅, ϑ̅(̅z̅,̅w̅)̅_2λH_6N U_p=p^λ-1ψ̅(p)ϑ̅(̅z̅,̅w̅)̅_2λH_6N, ϑ̅(̅z̅,̅w̅)̅_2λH_6N_2λ W_p^6N=-ψ̅(p)ϑ̅(̅z̅,̅w̅)̅_2λH_6N. Since 𝒲_N, λ+1/2 acts on z, it commutes with the operators in w. By (<ref>) we have f̅ U_p=f̅ ̅U̅_̅p̅ for any f, and ^*_2λH_6N is an involution. So in order to prove the proposition it will suffice by (<ref>) to prove the equivalent statements ϑ^*(z,w)^*_2λH_6N U_p=p^λ-1ψ̅(p)ϑ^*(z,w)^*_2λH_6N, ϑ^*(z,w)^*_2λH_6N^*_2λ W_p^6N=-ψ̅(p)ϑ^*(z,w)^*_2λH_6N, ϑ^*(z,w) U_p=p^λ-1ψ(p)ϑ^*(z,w), ϑ^*(z,w)^*_2λ W_p^6N=-ψ(p)ϑ^*(z,w). We begin with a lemma. We have ϑ^*(z,w)^*_2λ H_6N = y^-λv^1-λ/2ψ(6)∑_x∈ L'ψ̅x_3Nχ_12(x_2) e12u⟨ x,x ⟩ f_3√(v) σ_w^-1x. From (<ref>) we have H_6Nx=x_36N,-x_2,6Nx_1, from which it follows that H_6NL'=L'. From (<ref>), (<ref>) and (<ref>), we obtain ϑ^*(z,w)^*_2λ H_6N = (√̅(̅6̅N̅)̅w̅)^-2λH_6Nw^-λv^1-λ/2∑_x∈ L'ψ̅(x_1) χ_12(x_2) e12u⟨ x,x ⟩ f_3√(v) σ_H_6Nw^-1x = y^-λv^1-λ/2∑_x∈ L'ψ̅(x_1) χ_12(x_2) e12u⟨ x,x ⟩ f_3√(v) σ_w^-1 H_6N^-1x, and the lemma follows (using (<ref>) and (<ref>)) after replacing x by H_6Nx. Let p∈{2, 3}. We first consider the statements involving U_p. Define γ_j:=1/√(p)j/√(p)0√(p), so that ϑ^*(z,w) U_p=1p∑^p-1_j=0ϑ^*(z,γ_jw). For each j, we have σ_γ_j w=γ_jσ_w, Im(γ_jw)=yp, and ⟨γ_jx,γ_jx ⟩=⟨ x,x ⟩. We find that γ_jx=x_1+jx_2+j^2x_3p,x_2+2jx_3,p x_3, γ^-1_jx=p x_1-j x_2+j^2 x_3p,x_2-2jx_3p,x_3p, γ^-1_jL'={x ∈⊕ N⊕6Np: x_1+jx_2+j^2x_3 ≡ 0 p}. From these facts and (<ref>) we obtain ϑ^*(z,γ_jw) =yp^-λv^1-λ/2∑_x∈ L'ψ̅x_1χ_12(x_2) e12u⟨ x,x ⟩ f_3√(v) σ_w^-1γ^-1_jx =p^λψ(p)y^-λv^1-λ/2∑_x ∈γ^-1_jL'ψ̅(x_1) χ_12(x_2+2jx_3) e12u⟨ x,x ⟩ f_3√(v) σ_w^-1x. Let F_z,w(x) = y^-λv^1-λ/2ψ̅(x_1) e12u⟨ x,x ⟩ f_3(√(v) σ_w^-1x) for the moment; then ϑ^*(z,w) U_p = p^λ-1ψ(p) ∑_x_1∈ x_2 ∈ N x_3 ∈ (6N/p) F_z,w(x) ∑_j p x_1+jx_2+j^2x_3 ≡ 0 pχ_12(x_2+2jx_3). The inner sum is periodic in x_1 modulo p, in x_2 modulo 12, and in x_3 modulo 6, so we can compute its value in every case. We find that the inner sum equals zero unless x_3≡ 0 p, in which case it equals χ_12(x_2). 
Thus we can change the condition x_3∈ (6N/p) to x_3∈ 6N at the cost of multiplying by p; it follows that ϑ^*(z,w) U_p = p^λψ(p) ϑ^*(z,w). To establish (<ref>), write (for the moment) G(z, w)=ϑ^*(z,w)^*_2λ H_6N. Using Lemma <ref>, we find in analogy with (<ref>) that G(z, γ_jw) = p^λψ̅(p) y^-λv^1-λ/2ψ(6)∑_x∈γ_j^-1L'ψ̅x_3Nχ_12(x_2+2jx_3) e12u⟨ x,x ⟩ f_3√(v) σ_w^-1x. The rest of the computation proceeds exactly as above. For x∈ L' and p∈{2,3} we find using (<ref>) that W_p^6Nx=(x_1', x_2', x_3') =pα^2 x_1+α x_2+x_3p,12Nαβ x_1+pα+6Nβpx_2+2x_3,36N^2β^2p x_1+6Nβ x_2+px_3. We have x'_2 ≡ (2pα-1)x_212 and (since pα≡ 1 N) we have x'_1 ≡α x_1 N. We also have W_p^6NL'=L': the containment W_p^6N L' ⊆ L' is immediate, and since W_p^6N is an involution we get the other containment. Arguing as in the proof of Lemma <ref> gives ϑ^*(z,w)_2λ^*W_p^6N = y^-λv^1-λ/2∑_x∈ L'ψ̅(α x_1) χ_12(2pα-1)x_2 e12u⟨ x,x ⟩ f_3√(v) σ_w^-1x. From (<ref>) we see that 2pα-1≡ 712 if p=2, 512 if p=3, which gives (<ref>). An analogous argument using Lemma <ref> shows that ϑ^*(z,w)^*_2λ H_6N^*_2λ W_p^6N=v^1-λ/2 y^-λψ(6)∑_x ∈ L'ψ̅px_3Nχ_12(2pα-1)x_2e12u⟨ x,x ⟩f_3√(v)σ^-1_wx, which gives (<ref>) and finishes the proof of Proposition <ref>. § LIFTR61 §.§ Fourier expansion and transformation properties of the Shimura lift Here we prove a version of the main theorem in which we do not require(N,6)=1. Recall that for F(z) = ∑_n≡ r24 a(n) q^n/24∈ S_λ+1/2(N,ψν^r), we have the lift _t(F) = ∑_n=0^∞ b(n) q^n, whereb(n)is defined as in (<ref>) by ∑_n=1^∞b(n)/n^s = Ls-λ+1,ψ∙ t∑_n=1^∞χ_12(n)a(tn^2)/n^s. Let r be an integer with (r,6)=1 and let t be a squarefree positive integer. Suppose that λ,N∈^+ and let ψ be a Dirichlet character modulo N. If λ≥ 2 then _t(F)∈ S_2λ(6N, ψ^2), while if λ = 1 then _t(F)∈ M_2λ(6N, ψ^2) and _t(F) vanishes at ∞. Furthermore, the Hecke equivariance (<ref>) holds. We begin by assuming thatr=1. Suppose that F(z) = ∑_n≡ 124 a(n) q^n/24∈ S_λ+1/2(N,ψν). We may assume by (<ref>) thatψ(-1)=(-1)^λ. Define Φ(w) = c^-1∫_Γ_0(N)\ v^λ+1/2 F(z) ϑ(z,w) dudv/v^2, wherec:= 2(-12)^λ N^1/4 + λ/2. We will show thatΦ=_1(F). The proof of Proposition 2.8 of <cit.>, with only cosmetic changes, shows that the integral definingΦ(w)converges absolutely. The integral is well-defined because of (<ref>). By (<ref>) we have Φ(w) ∈ℳ_2λ(6N,ψ^2). Our next aim is to compute the Fourier expansion ofΦ(w). We first examine the behavior ofΦ(iy)asy→∞. By Lemma <ref> we have Φ(iy) = 2(-1)^λ c^-1∑_μ=0 μ even^λ c_μ y^1-μ∑_g=1^∞ψ(g) g^λ-μ ×∑_γ∈Γ_∞\Γ_0(N)ψ(γ)ν(γ) ∫_Γ_0(N)\ v^λ+1/2 (cz+d)^-λ-1/2h(γ z,y) F(z) dudv/v^2, where h(z,y) = v^μ-λe^-6π y^2 g^2/vϑ_1,μzN. Sinceψ(γ)ν(γ) F(z) = (cz+d)^-λ-1/2 F(γ z)we have Φ(iy) = 2(-1)^λ/c∑_μ=0 μ even^λ c_μ y^1-μ∑_g=1^∞ψ(g) g^λ-μ∑_γ∈Γ_∞\Γ_0(N)∫_Γ_0(N)\(γ z)^λ+1/2h(γ z,y) F(γ z) dudv/v^2 = 2(-1)^λ/c∑_μ=0 μ even^λ c_μ y^1-μ∑_g=1^∞ψ(g) g^λ-μ∫_Γ_∞\ v^λ+1/2h(z,y) F(z) dudv/v^2. For fixedvwe have ∫_0^1 h(u+iv,y) F(u+iv) du = ∑_n≡ 124 a(n) e^-π n v/12∫_0^1 h(u+iv,y) enu24 du. By (<ref>), whenμis even we have ϑ_1,μ zN = 2N^μ/2v^-μ/2∑_n=1^∞χ_12(n) H_μ( √( 13π n^2 v)) e(124n^2 z). Thus (<ref>) equals 2 N^μ/2 v^μ/2-λ e^-6π y^2 g^2/v∑_n=1^∞χ_12(n) a(n^2) e^-π n^2v/6 H_μ( √( 13π n^2 v)) and we obtain Φ(iy) = 4(-1)^λ/c∑_μ=0 μ even^λ c_μ N^μ/2 y^1-μ∑_g=1^∞ψ(g) g^λ-μ∑_n=1^∞χ_12(n) a(n^2) ×∫_0^∞ v^μ-3/2 e^-6π y^2 g^2/v-π n^2v/6 H_μ( √( 13π n^2 v)) dv. From this we can show thatΦ(iy)decays polynomially, or better, asy→∞. As y→∞ we have Φ(iy)≪_λ y^-λ. In particular, Φ(i∞)=0. 
Since F is a cusp form and H_μ(x)≪ (1+x)^μ, there exists a constant α>0, depending only on λ, such that a(n^2) H_μ(√(π n^2 v/3)) ≪ n^α (1+v)^μ/2 for μ≤λ. Thus Φ(iy) ≪_λ,N∑_μ=0^λ y^1-μ∫_0^∞ v^μ-3/2(1+v)^μ/2∑_g=1^∞ g^λ-μ e^-6π y^2 g^2/v∑_n=1^∞ n^α e^-π n^2 v/6 dv. For any A,B>0 we have ∑_m=1^∞ m^A e^-2B m^2≤ e^-B∑_m=1^∞ m^A e^-B m^2≪ e^-B∫_0^∞ x^A e^-Bx^2 dx ≪_A B^-A+1/2e^-B. It follows that if y≥ 1 then Φ(iy) ≪_λ,N∑_μ=0^λ y^1-μ∫_0^∞ v^μ-3/2(1+v)^μ/2 (y^2/v)^μ-λ-1/2 e^-3π y^2/v v^-α+1/2 e^-π v/12 dv ≪_λ,N y^-λ∑_μ=0^λ∫_0^∞ v^λ-α-3/2(1+v)^μ/2 e^-3π/v -π v/12 dv ≪_λ,N y^-λ. Thus Φ(i∞)=0. Now, following the argument in the proof of Proposition 2.15 of <cit.>, we can show thatΦ(iy)decays exponentially asy→∞. There exist complex numbers b(n) and c(-n), n∈, such that Φ(w) = ∑_n>0 b(n) e(nw) + ∑_n>0 c(-n) Γ(1-2λ,4π ny) e(-nw), where Γ(a,z) is the incomplete gamma function (see <cit.>). In particular, as y→∞ we have Φ(iy) ≪ e^-cy for some c>0. As in Theorem 2.14 of <cit.>, we have (y∂/∂ w - λ i)∂/∂w̅Φ = 0 because F is a cusp form. Since Φ(w+1)=Φ(w), there is a Fourier expansion of the form Φ(w) = ∑_n∈ b(n,y)e(nw), for some coefficients b(n,y). Equation (<ref>) implies that b”(n,y) = (4π n - 2λ/y) b'(n,y). A basis of solutions for this differential equation is {1, Γ(1-2λ, -4π n y)}. Thus we have Φ(w) = ∑_n∈ b(n) e(nw) + ∑_n∈ c(n) Γ(1-2λ, -4π n y) e(nw) for some b(n), c(n)∈. Writing β(n,y) = (b(n) + c(n) Γ(1-2λ, -4π n y)) e^-2π n y, we see that ∫_0^1 |Φ(w)|^2 dξ = ∑_m,n∈β(m,y)β̅(n,y) ∫_0^1 e((m-n)ξ) dξ = ∑_n∈ |β(n,y)|^2. Since Φ(iy) → 0 as y→∞, the functions β(n,y) must also have that property. By <cit.>, we have e^-2π n y| Γ(1-2λ, -4π n y) | ≍ |ny|^-2λ e^2π n y. Thus c(n)=0 for n>0 and b(n)=0 for n<0. We also have b(0)+c(0)Γ(1-2λ)=0 since Φ(i∞)=0, so Φ(iy) = ∑_n > 0 b(n) e^-2π n y + ∑_n>0 c(-n) Γ(1-2λ,4π ny) e^2π n y. This, together with (<ref>), shows that Φ(iy) decays exponentially as y→∞. LetΛ(s)denote the Mellin transform Λ(s) = ∫_0^∞ y^sΦ(iy) dy/y. By Lemma <ref>, the integral definingΛ(s)is absolutely convergent for(s)>1. Recall the expression (<ref>) forΦ(iy). We have 2∑_g=1^∞ψ(g)g^λ-μ∫_0^∞ y^s-μ e^-6π y^2 g^2/v dy = v6π^s-μ+1/2Γ(s-μ+12) L(s-λ+1,ψ), from which it follows that Λ(s) = 2(-1)^λ/cL(s-λ+1,ψ) ∑_μ=0 μ even^λ c_μ N^μ/2 (6π)^μ-s-1/2Γ(s-μ+12) ∑_n=1^∞χ_12(n) a(n^2) ×∫_0^∞ v^s/2 e^-π n^2 v/6 H_μ( √( 13π n^2 v)) dv/v. By (<ref>) and a straightforward inductive argument, the latter integral equals 2(-1)^μ3π^s/2n^-s∫_0^∞ t^s-1d^μ/dt^μ e^-t^2/2 dt = 2^-μ/26π^s/2n^-s (s-1)⋯ (s-μ) Γ(s-μ2). Thus Λ(s) = 2(-1)^λ/c6π^s/2 L(s-λ+1,ψ) ∑_n=1^∞χ_12(n)a(n^2)/n^s ×∑_μ=0 μ even^λ c_μ N^μ/2 2^-μ/2 (6π)^μ-s-1/2 (s-1)⋯ (s-μ) Γ(s-μ+12) Γ(s-μ2). We have (s-1)⋯ (s-μ) Γ(s-μ+12) Γ(s-μ2) = 2^μ+1-s√(π) Γ(s), so, using that∑_μ=0^⌊λ/2⌋λ2μ = 2^λ-1, we conclude that Λ(s) = (2π)^-sΓ(s) L(s-λ+1,ψ) ∑_n=1^∞χ_12(n)a(n^2)/n^s. Taking the inverse Mellin transform of (<ref>), we find that Φ(iy) = ∑_n>0b̃(n) e^-2π n y for some coefficientsb̃(n). By Lemma <ref> and the lemma on page 89 of <cit.>, we see thatb(n) = b̃(n)andc(n) = 0for alln. ThusΦis holomorphic on$̋ and has a Fourier expansion of the form Φ(z) = ∑_n=1^∞ b(n) q^n. It follows that Λ(s) = ∑_n=1^∞ b(n) ∫_0^∞ y^s e^-2π n y dy/y = (2π)^-sΓ(s) ∑_n=1^∞b(n)/n^s. Therefore we have the relationship ∑_n=1^∞b(n)/n^s = L(s-λ+1,ψ) ∑_n=1^∞χ_12(n)a(n^2)/n^s, that is, Φ= _1(F). We are now ready to prove Theorem <ref>. We first show that 𝒮_t(F) ∈ℳ_2λ(6N,ψ^2). When r=t=1, this is (<ref>). In the general case we may assume that t≡ r24 (otherwise _t(F) is identically zero). 
Then by Corollary <ref> we have F V_t∈ S_λ+1/26N,ψ∙ t ν. We claim that _1(F V_t) = χ_12(t)_t(F) V_t. Indeed, the Fourier coefficients c(n) of _1(F V_t) are given by ∑_n=1^∞c(n)/n^s = Ls-λ+1,ψ∙ t∑_n=1^∞χ_12(n)a(n^2/t)/n^s = χ_12(t)/t^sLs-λ+1,ψ∙ t∑_n=1^∞χ_12(n)a(tn^2)/n^s =χ_12(t)/t^s∑_n=1^∞b(n)/n^s where _t(F)=∑ b(n)q^n. It follows that c(n)=χ_12(t)b(n/t) for all n, which proves (<ref>). We have _t(F) V_t ∈ℳ_2λ(6Nt,ψ^2), so (<ref>) follows by <cit.>. _t(F) is holomorphic on $̋ and vanishes at∞by (<ref>). We next show that𝒮_t(F)is a cusp form whenλ≥ 2or a modular form whenλ=1. SinceFis a cusp form we havea(n)≪ n^λ/2+1/4, so the Fourier coefficientsb(n)of𝒮_t(F)satisfy b(n) = ∑_jk=nψ(j) jtj^λ-1χ_12(k)a(tk^2) ≪_t ∑_jk=n j^λ-1k^λ+1/2≪_t,ϵ n^λ+1/2+ϵ for anyϵ>0. Ifλ≥ 2then for sufficiently smallϵwe haveλ+1/2+ϵ<2λ-1, and a standard argument shows that𝒮_t(F)vanishes at the cusps. Ifλ=1then a similar argument shows that𝒮_t(F)is holomorphic at the cusps. To finish the proof we need only to establish (<ref>), which is the subject of the next result. Let F, N, r and t be as in the statement of Theorem <ref>. For any prime p≥ 5 we have _t(T_p^2F)=12pT_p_t(F). We may assume that r≡ t24. Writing _t(F)=∑ b(n)q^n, we have b(n)=∑_jk=nψ(j) jtj^λ-112ka(tk^2). Write T_p^2F=∑ A(n)q^n/24 as in (<ref>). Our goal is to show that b(pn)+ψ^2(p)p^2λ-1b np=12p∑_jk=nψ(j) jtj^λ-112kA(tk^2). Write n=p^α n' with p∤ n', and for ℓ≥0 define S_ℓ=ψ(p)^α+1-ℓ pt^α+1-ℓ12p^ℓ p^(α+1-ℓ)(λ-1)∑_jk=n'ψ(j) jtj^λ-112ka(tp^2ℓ k^2). A computation shows that the left side of (<ref>) is given by ∑_ℓ=0^α+1S_ℓ+p∑_ℓ=0^α-1S_ℓ. To compute the right side of (<ref>) we consider separately the three terms in the sum (<ref>) defining A(tk^2). After an involved computation we find that these three terms are given by ∑_ℓ=0^αS_ℓ+1+S_0+p∑_ℓ=1^αS_ℓ-1. The proposition follows. This completes the proof of Theorem <ref>. §.§ UPWP For the rest of Section <ref> we assume that(N, 6)=1. Forp∈{2,3}recall the definition (<ref>): _p, r, ψ=-ψ(p)4pr. Suppose that (r, 6)=(N, 6)=1, that t is a positive squarefree integer, and that ψ is a character modulo N. Let F∈ S_λ+1/2(N,ψν^r) and let f:=_t(F). For p∈{2, 3} the following are true. * f U_p=-p^λ-1_p, r, ψ f. * f _2λ W_p^6N=_p, r, ψ f. * f_2λH_6N U_p=-p^λ-1_p, r, ψ f_2λH_6N. * f_2λH_6N _2λ W_p^6N=_p, r, ψ f_2λH_6N. In the case r=1 the proposition follows directly from (<ref>) and Proposition <ref>. In the general case we may assume that t≡ r24 and therefore that r=t. From Corollary <ref> we have FV_t∈ S_λ+1/2Nt, ψ∙ tν. From the first two assertions of the proposition with r=1 together with (<ref>) we obtain _t(F)V_tU_p=-p^λ-1_p, 1, ψ∙ t_t(F)V_t and _t(F)V_t_2λW_p^6Nt=_p, 1, ψ∙ t_t(F)V_t. Note that V_t and U_p commute, that V_t_2λW_p^6Nt=_2λW_p^6NV_t, and that h_1V_t=h_2V_t if and only if h_1=h_2. It follows that _t(F)U_p=-p^λ-1_p, 1, ψ∙ t_t(F) and _t(F)_2λW_p^6N=_p, 1, ψ∙ t_t(F). The first two assertions follow since _p, r, ψ=_p, 1, ψ∙ t. We turn to the second two assertions. From the r=1 case we have _1(FV_t)_2λH_6NtU_p=-p^λ-1_p, 1, ψ∙ t_1(FV_t)_2λH_6Nt and _1(FV_t)_2λH_6NtW_p^6Nt=_p, 1, ψ∙ t_1(FV_t)_2λH_6Nt. By (<ref>) we have _1(FV_t)_2λH_6Nt=_t(F)V_t_2λH_6Nt=_t(F)_2λH_6N. The assertions follow from these facts. §.§ Proof that the lift is a cusp form After Theorem <ref>, we need only to show that the lift of a form of weight3/2which is orthogonal to all theta series is cuspidal. It would be possible to modify the arguments of Cipra to prove this for generalN. 
Since these are quite involved, we choose instead to give an argument which leverages those results in the case when(N, 6)=1. We make a general statement since the proof does not depend on the weight. Suppose that (r, 6)=(N, 6)=1, that t is a positive squarefree integer, and that ψ is a character modulo N. Let F∈ S_λ+1/2(N,ψν^r) where λ∈, and if λ=1 assume further that F∈ S_3/2^c(N,ψν^r). Then _t(F)∈ S_2λ(6N, ψ^2, _2,r, ψ,_3, r, ψ). Let f=_t(F). After Theorem <ref> and Proposition <ref> we need only to show that f vanishes at all cusps (as mentioned, this has already been shown when λ>1). To this end let f̂=_t(F V_24). By Proposition <ref> we have the relationship f1-U_2V_2-U_3V_3+ U_6V_6=f̂⊗12∙. By Theorem 4.3 and Corollary 4.5 of <cit.> we know that f̂⊗12∙ is a cusp form. We will use the following lemma. Suppose that k, M∈, that p is a prime with p∤ M, and that χ is a character modulo M. Suppose that f∈ℳ_k(pM, χ) and that f is holomorphic on $̋. Suppose further that there exists_pwith * f _k W^pM_p=_p f, * f U_p=-_p p^k/2-1f, and * f-f U_p V_p∈ S_k(p^2M, χ). Thenf∈ S_k(pM, χ). Let the hypotheses be as in the statement. Since p∤ M, it follows from Corollary 3.2 of <cit.> that each cusp of Γ_0(pM) can be represented by a rational number of one of the forms 1/c or 1/pc where p∤ c. The lemma will follow from relating the expansions of f and f U_pV_p at these cusps. For each c there is a positive integer h_c for which we have an expansion of the form f _k10c1=∑_n∈a(n)q_h_c^n, q_h_c:=e^2π i z/h_c. Suppose that p∤ c. By the assumptions and (<ref>) we have f U_pV_p _k10pc1 =-_pp^-1f _k p00110pc1 =-_pp^-1f _k 10c1 p001 =-_pp^k/2-1∑ a(n)q_h_c^pn. Writing W^pM_p=pαδpMβp as in (<ref>), we have f _k10pc1=_p f _k W_p^pM10pc1=_pf _kγ p001, where γ=α+cδδcp+Mβp∈_2(). For any integer j we can write γ=γ'10c11j01 where γ'=**Mβ(1+cj) + c^2j pp - c j p - j M β. Choosing j with j≡ 0 M and cj≡ -1 p (which is possible since p∤ Mc), we can ensure that γ'∈Γ_0(pM). Using (<ref>) and (<ref>), we compute (where ζ_h_c=e^2π i/h_c) f _k10pc1 =_pf _kγ'10c11j01 p001 =_pχ(p)∑ a(n)ζ_h_c^njq_h_c^n _k p001 =_pχ(p)p^k/2∑ a(n)ζ_h_c^njq_h_c^pn. From (<ref>) and (<ref>) we conclude that f-f U_pV_p _k10pc1=_p∑ a(n) [χ(p)ζ_h_c^njp^k/2+p^k/2-1]q_h_c^pn. By assumption, f-f U_pV_p is a cusp form. Since the quantity in brackets is non-zero, it follows that a(n)=0 for all n≤ 0. By (<ref>) and (<ref>) we conclude that f vanishes both at 1/c and at 1/pc. The lemma follows in view of (<ref>). Returning to the proof of Proposition <ref>, consider the formh:=f-f U_2 V_2∈ℳ_2λ(12N, ψ^2). We apply Lemma <ref> tohwithp=3andM=4N. By (<ref>) we have h-h U_3V_3=f1-U_2V_2-U_3V_3+ U_6V_6∈ S_2λ(36N, ψ^2), so the third condition in Lemma <ref> is satisfied. Since 2001W^12N_3=W^6N_32001, Proposition <ref> gives h _2λW^12N_3 =f _2λW^12N_3+12_2, r, ψf_2λ2001_2λW_3^12N =f _2λW^6N_3+12_2, r, ψf _2λW_3^6N_2λ2001=_3, r, ψ h. SinceU_3commutes withU_2andV_2we haveh U_3=-3^λ-1_3, r, ψh. Lemma <ref> shows thath∈ S_2λ(12N, ψ^2). A second application of the lemma withp=2andM=3Nshows thatf∈ S_2λ(6N, ψ^2)as desired. §.§ Proof that the lifts are new After Proposition <ref> and Theorem <ref> it suffices to prove that the lifts are new at2and3in order to finish the proof of Theorem <ref>. LetF,N,randtbe as in the statement, and letf=_t(F)∈ S_2λ6N, ψ^2. From Lemma <ref> we find forp∈{2,3}that ^6N_6N/pf = f+ψ̅^2(p)p^1-λf _2λW_p^6NU_p, ^6N_6N/pf_2λH_6N = f_2λH_6N+ψ^2(p)p^1-λf_2λH_6N_2λW_p^6NU_p. 
By Proposition <ref> both of these expressions are zero, and we conclude by (<ref>) that_t(F)is new at2and3. This concludes the proof of Theorem <ref>. § THE SHIMURA LIFT WHEN (R,6)=3 In this section we sketch the proof of Theorem <ref>. Since the construction of the theta kernel and the Shimura lift in the case(r,6)=3are similar to the case(r,6)=1we will omit most of the details. As before we begin with a proposition describing the transformation properties of the lift. Recall that for F(z) = ∑_n≡r/38 a(n) q^n/8∈ S_λ+1/2(N,ψν^r), the lift is given by _t(F) = ∑_n=0^∞ b(n) q^n, where theb(n)are defined as in (<ref>) by ∑_n=1^∞b(n)/n^s = Ls-λ+1,ψ∙ t∑_n=1^∞χ_-4(n)a(tn^2)/n^s. Hereχ_-4=-4∙andψ(-1)=-1r(-1)^λ(recall (<ref>)). In the next two subsections we sketch the proof of the analogue of Theorem <ref>. Let r be an integer with (r,6)=3 and let t be a squarefree positive integer. Suppose that λ,N∈^+ and let ψ be a Dirichlet character modulo N. If λ≥ 2 then _t(F)∈ S_2λ(2N, ψ^2), while if λ = 1 then _t(F)∈ M_2λ(2N, ψ^2) and _t(F) vanishes at ∞. Furthermore, the Hecke equivariance (<ref>) holds. §.§ The theta kernel We begin with the rank 3 lattices L = N⊕ 4N⊕ 2N, L' = ⊕ N⊕ 2N, L^* = ⊕⊕ 2. Note thatL^*is dual toLwith respect to the bilinear form ⟨ x,y ⟩ = x_2y_2 - 2x_1y_3 - 2x_3y_1/4N of signature(2,1)associated toQ = 1/4N([ -2; 1 ; -2 ]). Let f_3(x) = (x_1-ix_2-x_3)^λexp(-π/4N(2x_1^2+x_2^2+2x_3^2)) = πN^-λ/2∑_μ=0^λλμ(-i)^μ f_2,λ-μ,1(x_1,x_3)f_1,μ(x_2), where f_1,μ(x_2) = H_μ(√(πN) x_2) e^-π4N x_2^2, f_2,μ,y(x_1,x_3) = H_μ( √(πN) (y^-1x_1-yx_3) ) exp-π(y^-2x_1^2+y^2x_3^2)2N. By slightly modifying the proofs of Lemmas <ref>, <ref>, and <ref>, we obtain the following analogue of Lemma <ref>. Suppose that f has the spherical property in weight λ+1/2. For h∈ L'/L, write h=(h_1,Nh_2,2Nh_3) and define θ_3(z) = ∑_h∈ L'/Lψ̅(h_1) χ_-4(h_2) θ(z,f,h). Then for each γ = abcd∈Γ_0(N) we have θ_3(γ z) = ν_N^3(γ)ψ̅(d)(cz+d)^λ+1/2θ_3(z). Forw=ξ+iy∈$̋ define ϑ^*(z,w) = y^-λ∑_h∈ L'/Lψ̅(h_1) χ_-4(h_2) θ(z,σ_w f_3,h). For γ∈Γ_0(2N) the map x↦γ x leaves the lattice L' and the quantity χ_-4(x_2) invariant and maps ψ̅(x_1) to ψ^2(d)ψ̅(x_1). In analogy with (<ref>) we have ϑ^∗(z,γ w) = ψ^2(d) (cw̅+d)^2λϑ^∗(z,w) for γ= abcd∈Γ_0(2N). The theta kernel is ϑ(z,w) = N^-λ/2-1/4(-iz)^-λ-1/2(2N)^-λw̅^-2λϑ^*(-1/Nz,-1/2Nw), which satisfies ϑ(·,w) ∈ℳ_λ+1/2 (N, ψν^3), ϑ(z,·) ∈ℳ_2λ(2N,ψ^2). In analogy with Lemma <ref>, ϑ(z,w) takes the following shape on the imaginary axis. We have ϑ(z,iy) = ∑_μ=0 μ odd^λ c_μ y^1-μ∑_g∈ψ̅(-g)g^λ-μ ×∑_γ∈Γ_∞\Γ_0(N)ψ̅(γ) (cz+d)^-λ-1/2exp(-2π y^2 g^2/(γ z))ν^-3(γ) (γ z)^μ-λϑ_1,μγ zN, where ϑ_1,μ(z) = v^-μ/2∑_x_2∈χ_-4(x_2) H_μ(√(π N v) x_2)e( 18Nx^2z) and c_μ = iλμ 2^λ-μ+1/2π^-μ/2 N^λ/2-μ/2+1/4. The function ϑ_1,μ is zero whenever μ is even. We begin by using (<ref>) to decompose ϑ^*(z,iy) as ϑ^*(z,iy) = πN^-λ/2 y^-λ∑_μ=0 μ odd^λλμ(-i)^μϑ_1,μ(z) ϑ_2,λ-μ(z,y), where ϑ_2,μ(z,y) is defined similarly to its counterpart in Section <ref>. By a computation similar to that in the proof of Lemma <ref> we have Nπ^λ-μ/2 y^λ-μ+1√(2N)/i^λ-μv^μ-λz^μ-λϑ_2,λ-μ(-1/Nz,y) = ∑_x_1,x_3∈ψ̅(-x_1)(x_1+Nx_3z̅)^λ-μexp(-π|x_1+Nx_3z|^2/2N^2vy^2) = ∑_g∈ψ̅(-g)g^λ-μ∑_γ∈Γ_∞\Γ_0(N)ψ̅(γ) (cz̅+d)^λ-μexp(-π g^2/2N^2(γ z)y^2), where γ = **cd. As in Lemma <ref> we have ϑ_1,μ(-1/Nz) = i^-3/2z^μ+1/2ϑ_1,μ(z/N). Since ϑ_1,μ(z/N) transforms like η^3(z) in weight μ+1/2 we have v^μ-λ(cz̅+d)^λ-μϑ_1,μ(-1/Nz) = i^-3/2z^μ+1/2ν^-3(γ) (γ z)^μ-λ (cz+d)^-λ-1/2ϑ_1,μγ zN. 
It follows that (-iz)^-λ-1/2ϑ^*(-1/Nz,iy) = (-1)^λ i/√(2N)∑_μ=0 μ odd^λλμπN^-μ/2 y^-2λ+μ-1∑_g∈ψ̅(-g)g^λ-μ ×∑_γ∈Γ_∞\Γ_0(N)ψ̅(γ) (cz+d)^-λ-1/2exp(-π g^2/2N^2(γ z)y^2)ν^-3(γ) (γ z)^μ-λϑ_1,μγ zN. From here it is straightforward to obtain (<ref>). §.§ The Shimura lift We begin with the case when r=3 and t=1. Suppose that F(z) = ∑_n≡ 1 8 a(n) q^n/8∈ S_λ+1/2(N,ψν^3), and define Φ(w) = c^-1∫_Γ_0(N)\ v^λ+1/2 F(z) ϑ(z,w) dudv/v^2, where c = 12i(-4)^λ+1 N^1/4 + λ/2. As before, the integral defining Φ(w) converges absolutely and by (<ref>) we have Φ(w)∈ℳ_2λ2N, ψ^2. Following the analogous computation in Section <ref>, we find that Φ(z) = ∑_n=1^∞ b(n) q^n, where ∑_n=1^∞b(n)/n^s = L(s-λ+1,ψ) ∑_n=1^∞χ_-4(n)a(n^2)/n^s. The form Φ(z) is the t=1 Shimura lift _1(F); this proves Theorem <ref> when r=3 and t=1. For the remaining cases we use the fact that 𝒮_1(F V_t) = χ_-4(t) 𝒮_t(F) V_t. The remainder of the proof of Theorem <ref> follows the proof of Theorem <ref>. In particular, the proof of Hecke equivariance uses a direct analogue of Proposition <ref>. §.§ Proof of Theorem <ref> The next result can be proved using the method of Proposition <ref>. Suppose that N is odd. Then the following are true (where all operators act on the variable w). ϑ̅(̅z̅,̅w̅)̅ U_2=2^λ-1ψ(2)ϑ̅(̅z̅,̅w̅)̅, ϑ̅(̅z̅,̅w̅)̅_2λ W_2^2N=-ψ(2)ϑ̅(̅z̅,̅w̅)̅, ϑ̅(̅z̅,̅w̅)̅_2λH_2N U_2=2^λ-1ψ̅(2)ϑ̅(̅z̅,̅w̅)̅_2λH_2N, ϑ̅(̅z̅,̅w̅)̅_2λH_2N_2λ W_2^2N=-ψ̅(2)ϑ̅(̅z̅,̅w̅)̅_2λH_2N. This can be used in turn to prove the analogue of Proposition <ref>. Suppose that N is odd, that (r, 6)=3, that t is a positive squarefree integer, and that ψ is a character modulo N. Let F∈ S_λ+1/2(N,ψν^r) and let f=_t(F). Then we have the following: * f U_2=-2^λ-1_2, r, ψ f. * f _2λ W_2^2N=_2, r, ψ f. * f_2λH_2N U_2=-2^λ-1_2, r, ψ f_2λH_2N. * f_2λH_2N _2λ W_2^2N=_2, r, ψ f_2λH_2N. Let F, N, r, and t be as in the statement of Theorem <ref> and let f=_t(F). With f̂=_t(f V_8), Proposition <ref> gives the relationship f1-U_2V_2=f̂⊗-4∙. It follows from Theorem <ref> and Cipra's work that f1-U_2V_2∈ S_2λ4N,ψ^2, and using Lemma <ref> we conclude that f∈ S_2λ2N,ψ^2. As in Section <ref> we find that f is new at 2. Together, these facts complete the proof of Theorem <ref>. § QUADRATIC CONGRUENCES In this section, we prove Theorems <ref> and <ref> using a generalization of the arguments of <cit.>. §.§ Background on modular Galois representations We summarize some facts about modular Galois representations. See <cit.> and <cit.> for more details. We begin with some notation. Let k be an even integer and N be a positive integer. Throughout, let ℓ≥ 5 be a prime such that ℓ∤ N. Let ⊆ be the algebraic closure of in . If p is prime, then let _p be a fixed algebraic closure of _p and fix an embedding ι_p:↪_p. The embedding ι_ℓ allows us to view the coefficients of forms in S_k(N) as elements of _ℓ, and for each prime p, the embedding ι_p allows us to view G_p:=(_p/_p) as a subgroup of G_:=(/). For any finite extension K/, let G_K:=(K̅/K). If I_p⊆ G_p is the inertia subgroup, then we denote the coset of absolute Frobenius elements above p in G_p/I_p by Frob_p. We denote by χ_ℓ :G_→^×_ℓ and ω_ℓ :G_→𝔽^×_ℓ the ℓ-adic and mod ℓ cyclotomic characters, respectively. We let ω_2,ω'_2:I_ℓ→𝔽^×_ℓ^2 denote Serre's fundamental characters of level 2 (see <cit.>). Both characters have order ℓ^2-1, and we have ω^ℓ+1_2=ω'^ℓ+1_2=ω_ℓ. The following theorem is due to Deligne, Fontaine, Langlands, Ribet, and Shimura (see also <cit.>). Let f=∑ a(n)q^n∈ S_k(N) be a normalized Hecke eigenform. 
There is a continuous irreducible representation ρ_f:G_→_2(_ℓ ) with semisimple mod ℓ reduction ρ̅_f:G_→_2(𝔽̅_ℓ ) satisfying the following properties. * If p ∤ℓ N, then ρ_f is unramified at p and the characteristic polynomial of ρ_f(_p) is X^2-ι_ℓ (a(p))X+p^k-1. * If f ∈ S^new Q_k(N), where Q is a prime with Q|| N then we have ρ_f|_G_Q≅(χ_ℓψ * 0 ψ), where ψ:G_Q→^×_ℓ is the unramified character with ψ(_Q)=ι_ℓ (a(Q)). * Assume that 2 ≤ k ≤ℓ+1. * If ι_ℓ (a(ℓ)) ∈_ℓ ^×, then ρ_f |_G_ℓ is reducible and we have ρ_f|_I_ℓ≅(χ_ℓ ^k-1 * 0 1 ). * If ι_ℓ (a(ℓ)) ∉_ℓ ^×, then ρ̅_f_G_ℓ is irreducible and ρ̅_f|_I_ℓ≅ω^k-1_2 ⊕ω'^(k-1)_2. The Galois representations depend on the choice of embedding ι_ℓ :↪_ℓ, but we have suppressed this from the notation. §.§ Suitability Recall the definition of suitability from the introduction. Here we show that suitability holds for many spaces of forms. Suppose that ℓ≥ 5 is prime and that r is an odd integer. Let N be a squarefree, odd, positive integer with ℓ∤ N, and 3∤ N if 3∤ r. Let ψ be a quadratic Dirichlet character modulo N. Let k be an even positive integer. Then (k,ℓ) is suitable for every triple (N,ψ,r) if the following conditions hold: * k ≤ℓ-1, * 2^k-1≢2^± 1ℓ, * k ≠ℓ+1/2, ℓ+3/2, * ℓ+1/(ℓ+1,k-1), ℓ-1/(ℓ-1,k-1)≥ 6. When ℓ > 5k-4, we always have conditions (1), (3) and (4). The final assertion is easy to check. Assume that (r,6)=1 and suppose that f=∑ a(n)q^n∈ S^ 2,3_k(6N,_2,r,ψ,_3,r,ψ) is a normalized Hecke eigenform. It follows from <cit.> that there are four possibilities for the image of ρ̅_f: * ρ̅_f is reducible. * ρ̅_f is dihedral, i.e. ρ̅_f is irreducible but ρ̅_f_G_K is reducible for some quadratic K/. * ρ̅_f is exceptional, i.e. the projective image of ρ̅_f is conjugate to one of A_4, S_4, or A_5. * The image of ρ̅_f contains a conjugate of _2(𝔽_ℓ ). We proceed by ruling out the first 3 cases. By condition (2) and <cit.>, we see that ρ̅_f is irreducible. By condition (3) and <cit.>, we conclude that ρ̅_f is not dihedral. To rule out the exceptional case, it suffices to show that the projective image contains an element of order ≥ 6. Suppose that ι_ℓ (a(ℓ)) ∈^×_ℓ. By Theorem <ref>, we know that ρ_f_I_ℓ≅χ_ℓ ^k-1*01. Since ω_ℓ has order ℓ-1, the projective image of ρ̅_f contains an element of order ≥ℓ-1/(ℓ-1,k-1)≥ 6. If ι_ℓ (a(ℓ)) ∉^×_ℓ, then Theorem <ref> implies that ρ̅_f≅ω^k-1_200ω'^k-1_2. Since ω_2/ω'_2 has order ℓ+1, we conclude by condition (4) that the projective image of ρ̅_f contains an element of order ℓ+1/(ℓ+1,k-1)≥ 6. This completes the proof when (r,6)=1; the result when (r,6)=3 follows in a similar fashion. §.§ Preliminary results We begin by proving the main technical result used in the proof of Theorem <ref>. Suppose that ℓ≥ 5 is prime and that r is an odd integer. Let N be a squarefree, odd, positive integer with 3∤ N if 3∤ r. Suppose that ℓ∤ N and let ψ be a quadratic Dirichlet character modulo N. Let k be an even positive integer such that (k,ℓ) is suitable for (N,ψ,r) and let m ≥ 1 be an integer. If (r,6)=1, then there exists a positive density set S of primes such that if p ∈ S, then p ≡ 1 ℓ^m and T_pf ≡ f ℓ^m for each normalized Hecke eigenform f ∈ S^ 2,3_k(6N,_2,r,ψ,_3,r,ψ). If (r,6)=3, then we have the same result for S^ 2_k(2N,_2,r,ψ). We assume that (r,6)=1; the proof is similar when (r,6)=3. Choose a number field E containing all coefficients of all normalized Hecke eigenforms in S^ 2,3_k(6N,_2,r,ψ,_3,r,ψ). 
If λ is the prime of E induced by the embedding ι_ℓ then let E_λ be the completion of E at λ with ring of integers 𝒪_λ and ramification index e. By Proposition <ref> and <cit.>, there exists σ∈(/(ζ_ℓ)) such that ρ̅_f(σ) is conjugate to 11-10 for every normalized Hecke eigenform f ∈ S^ 2,3_k(6N,_2,r,ψ,_3,r,ψ). This implies that the characteristic polynomial of ρ_f(σ) is congruent to x^2-x+1 λ. For a positive integer w, we argue as in the proof of <cit.> to conclude that the characteristic polynomial of ρ_f(σ^ℓ^w-1) is congruent to x^2-x+1 λ^w. We then apply the Chebotarev density theorem and Theorem <ref> as in the proof of <cit.>. Our conclusion is that for every positive integer w, there exists a positive density set S_w of primes such that if p ∈ S_w, then p ≡ 1 ℓ^w and we have T_pf ≡ f λ^w for each normalized Hecke eigenform f ∈ S^ 2,3_k(6N,_2,r,ψ,_3,r,ψ). We claim that S^ 2,3_k(6N,_2,r,ψ,_3,r,ψ) has a basis {g_1, …, g_t} consisting of forms whose coefficients are integers. To see this, let {f_1,…,f_s} be the set of normalized Hecke eigenforms in this space. Then there is a basis composed of the f_i and their images under various V_d with d| N (<cit.> describes the interaction between the V_d and the Atkin-Lehner operators). The claim follows by a standard argument (see <cit.> or <cit.>) once we know that this basis is stable under the action of the Galois group. But this is clear, since the Galois action commutes with V_d and since for p∈{2, 3} the Atkin-Lehner eigenvalues at p are determined by the pth Fourier coefficients of the f_i, which are integers by Corollary <ref>. Write g_i=∑^s_j=1∑_d | Nα_i,j,df_j V_d and f_j=∑^t_i=1β_i,jg_i. Enlarge E to contain all of the coefficients α_i,j,d and β_i,j and let π∈𝒪_λ be a uniformizer. Choose c_1≥ 0 such that π^c_1α_i,j,d∈𝒪_λ for all α_i,j,d. Finally, for M ∈^+, let w=eM+c_1. Since the operators T_p and V_d commute when d | N and p ∤ N, it follows for p ∈ S_w with p ∤ N that p ≡ 1 ℓ^M and T_pg_i ≡ g_i λ^eM for i ∈{1,…,t}, which implies that T_pg_i ≡ g_i ℓ^M for i ∈{1,…,t}. Let 𝒪_ℓ⊂ E be the subring of elements which are integral at all primes above ℓ. If we set M=m+c_2, where c_2 ∈^+ is chosen so that ℓ^c_2β_i,j∈𝒪_ℓ for all coefficients β_i,j, then (<ref>) shows that for p ∈ S_w we have p ≡ 1 ℓ^m and T_pf_j ≡ f_j ℓ^m for j ∈{1,…,s}. The result follows. In order to prove Theorem <ref>, we require the following analogue of <cit.>. Let k ∈^+ be even. Suppose that ℓ≥ 5 is prime and that there exists an integer a for which 2^a≡ -2 ℓ. Let m ≥ 1 be an integer. Let N ∈^+ be odd and squarefree with ℓ∤ N, and 3∤ N if 3∤ r. If (r,6)=1, then there exists a positive density set S of primes such that if p ∈ S, then p ≡ -2 ℓ^m and for each normalized Hecke eigenform f= ∑ a(n)q^n∈ S^ 2,3_k(6N, _2,r,ψ,_3,r,ψ), we have T_pf ≡ -(-_2, r, ψ)^ap^k/2-1f ℓ^m. If (r,6)=3, then the same result holds for S^ 2_k(2N, _2,r,ψ). We assume that (r,6)=1; the proof is similar if (r,6)=3. Let E and λ be defined as in the proof of Theorem <ref>, and let w ∈^+. By <cit.> we have 2^ℓ^w-1(a-1)+1≡ -2 ℓ^w. Since _2, r, ψ∈{± 1}, and a and ℓ^w-1(a-1)+1 have the same parity, we may assume that 2^a≡ -2 ℓ^w. It follows from Corollary <ref> that a(2)=-_2, r, ψ2^k/2-1. 
We apply part (2) of Theorem <ref> and the Chebotarev density theorem as in the proof of <cit.> to conclude that there is a positive density set S_w of primes such that if p ∈ S_w, then p= χ_ℓ(_p) ≡χ_ℓ(^a_2) ≡ 2^a≡ -2 ℓ^w and for all normalized Hecke eigenforms f ∈ S^ 2,3_k(6N, _2,r,ψ,_3,r,ψ), we have a(p) =ρ_f(_p) ≡ρ_f(^a_2) ≡ (-_2, r, ψ2^k/2-1)^a2^a+(-_2, r, ψ2^k/2-1)^a= (-_2, r, ψ)^a2^ak/2-1(2^a+1) ≡ -(-_2, r, ψ)^a p^k/2-1λ^w. Thus if p∈ S_w then for all normalized Hecke eigenforms f ∈ S^ 2,3_k(6N, _2,r,ψ,_3,r,ψ), we have T_pf ≡ -(-_2, r, ψ)^ap^k/2-1f λ^w. We then argue as in the end of the proof of Theorem <ref> to see that for any m ∈^+, there exists w ≥ m such that if p ∈ S_w, then p ≡ -2 ℓ^m and T_pf ≡ -(-_2, r, ψ)^ap^k/2-1f ℓ^m for all normalized Hecke eigenforms f ∈ S^ 2,3_k(6N, _2,r,ψ,_3,r,ψ). §.§ Proofs of Theorems <ref> and <ref> Let ℓ≥ 5 be prime and r be odd. Let m ≥ 1 be an integer. Let λ be a positive integer and N ≥ 1 be an odd, squarefree integer such that ℓ∤ N, and 3 ∤ N if 3 ∤ r. Let ψ be a quadratic Dirichlet character modulo N. Recall that S_λ+1/2(N,ψν^r)_ℓ⊂ S_λ+1/2(N,ψν^r) consists of forms with algebraic coefficients which are integral at all primes above ℓ. Suppose that F(z)=∑_n ≡ r 24 a(n)q^n/24∈ S_λ+1/2(N, ψν^r)_ℓ. Furthermore, if λ=1, suppose that F ∈ S^c_3/2(N,ψν^r). For each squarefree t we have 𝒮_t(F) ∈ S^ 2,3_2λ(6N,_2,r,ψ,_3,r,ψ) if (r,6)=1 and 𝒮_t(F) ∈ S^ 2_2λ(2N,_2,r,ψ) if (r,6)=3. As t ranges over all squarefree integers, there are only finitely many non-zero possibilities for 𝒮_t(F) modulo ℓ^m; let {g_1,…,g_k} be a set of representatives for these possibilities. Let {f_1,…,f_s} be the set of normalized Hecke eigenforms in S^ 2,3_2λ(6N,_2,r,ψ,_3,r,ψ) if (r,6)=1 and in S^ 2_2λ(2N,_2,r,ψ) if (r,6)=3. Write g_j=∑^s_i=1∑_d | Nc_i,j,df_i V_d with c_i,j,d∈. Choose c ≥ 0 such that ℓ^cc_i,j,d is integral at all primes above ℓ for all c_i,j,d. We require two short lemmas. Define χ^(r)= -4/∙ if 3 | r, 12/∙ if 3∤ r. Suppose that p, ℓ≥ 5 are prime, and that r is odd. Let N be a squarefree, odd, positive integer such that p ∤ N, ℓ∤ N, and 3 ∤ N if 3 ∤ r. Let ψ be a quadratic character modulo N and suppose that F ∈ S_λ+1/2(N, ψν^r)_ℓ. Let m ≥ 1 be an integer, and let the forms f_i and the integer c be defined as above. If λ_p is an integer such that T_pf_i ≡λ_pf_iℓ^m+c for all i, then T_p^2F ≡χ^(r)(p)λ_pF ℓ^m . Since T_pf_i ≡λ_pf_i ℓ^m+c for all i, it follows from (<ref>) and the fact that T_p and V_d commute for d | N that T_pg_j ≡λ_pg_j ℓ^m for j ∈{1,…,k} . Thus, for each squarefree t, it follows from (<ref>) and (<ref>) that 𝒮_t(T_p^2F)=χ^(r)(p)T_p𝒮_t(F) ≡χ^(r)(p)λ_p𝒮_t(F) ℓ^m . A standard argument shows that F ≡ 0 ℓ^m_t(F)≡ 0 ℓ^m for all squarefree t. The result follows. The next lemma explains how to produce congruences from Lemma <ref>. Suppose that p, ℓ≥ 5 are prime and that r is odd. Let N be a squarefree, odd, positive integer such that p ∤ N, ℓ∤ N, and 3 ∤ N if 3 ∤ r. Let m ≥ 1 be an integer. Let ψ be a quadratic character modulo N and suppose that F=∑_n ≡ r 24 a(n)q^n/24∈ S_λ+1/2(N,ψν^r)_ℓ. Suppose that there exists α_p∈{± 1} with T_p^2F ≡α_pp^λ-1F ℓ^m . Then we have a(p^2n) ≡ 0 ℓ^m if np =α_p12p -1p^r-1/2ψ(p). This follows from the definition of the Hecke operator in (<ref>), which is valid for p≥ 5 in all cases by the remark following (<ref>). If we write T_p^2F=∑ c(n)q^n/24 and n satisfies the quadratic condition above, then the third term defining c(n) does not contribute and the middle term becomes α_pp^λ-1a(n). 
We now prove Theorem <ref> and Theorem <ref>. Let c be the integer defined in Lemma <ref>. If (r,6)=1, we apply Theorem <ref> to conclude that there exists a positive density set S of primes such that if p ∈ S, then we have p ≡ 1 ℓ^m, p ∤ 6N and T_pf ≡ f ℓ^m+c for each normalized Hecke eigenform f ∈ S^ 2,3_2λ(6N,_2,r,ψ,_3,r,ψ). If (r, 6)=3 we obtain the same conclusion for S^ 2_2λ2N,_2,r,ψ. For such p, Lemma <ref> implies that T_p^2F ≡χ^(r)(p)F ℓ^m . The result follows from Lemma <ref>. Suppose that ℓ≥ 5 is a prime such that 2^a≡ -2 ℓ for some integer a. If (r,6)=1, then by Theorem <ref>, there exist β∈{± 1} and a positive density set S of primes such that if p ∈ S, then p ≡ -2 ℓ^m, p ∤ 6N, and for each normalized Hecke eigenform f ∈ S^ 2,3_2λ(6N,_2,r,ψ,_3,r,ψ), we have T_pf ≡β p^λ-1f ℓ^m+c. If (r, 6)=3 the same holds for S^ 2_2λ2N,_2,r,ψ. In either case, for such p Lemma <ref> implies that T_p^2F ≡βχ^(r)(p)p^λ-1F ℓ^m . The result follows from Lemma <ref>. We could also prove a theorem similar to <cit.>. For a fixed α∈, we can assume without loss of generality that 𝒪_λ has the property that the polynomial x^2-α x+1 factors in 𝒪_λ with roots α_1 and α_2. If (k,ℓ) is suitable for (N,ψ,r), α≢± 2 ℓ and (r,6)=1, then one could show that there exists a positive density set S of primes such that if p ∈ S, then p ≡ 1 ℓ^m and for each normalized Hecke eigenform f ∈ S^ 2,3_k(6N,_2,r,ψ,_3,r,ψ), we have T_pf ≡ (α_1^ℓ^m-1+α_2^ℓ^m-1)f λ^m. A similar result holds when (r,6)=3. With this tool in hand, we can prove congruences similar to those given in <cit.>. In particular, if α is an integer which satisfies α≢-2 ℓ, then one could show that there is a positive density set S of primes such that if p ∈ S, then p ≡ 1 ℓ^m and a(p^2n) ≡ (α-1)χ^(r)(p)a(n) π^m if n/p=-1/p^r-1/2ψ(p) if 3 ∤ r, -3/p-1/p^r-1/2ψ(p) if 3| r, where π is a prime above ℓ in a large enough number field. § CONGRUENCES FOR COLORED GENERALIZED FROBENIUS PARTITIONS Here we give an extended example which illustrates the use of our main results to prove congruences for the colored Frobenius partitions described in the Introduction. A complete treatment will be the subject of a future paper. As described in the Introduction, a result of Chan, Wang and Yang <cit.> shows that if m is a positive odd integer, then A_m(z):=∏_n ≥ 1(1-q^n)^m∑^∞_n=0 cϕ_m(n)q^n ∈ M_m-1/2m, ∙ m. Here we discuss the case m=5. Letting Δ denote the normalized cusp form of weight 12 on _2(), we define (for primes ℓ≥ 7) F_ℓ = η^-5ℓ T_ℓΔ^5(ℓ^2-1)/24A_5∈ M_5ℓ^2-5ℓ-1/2^!5,∙ 5ν^-5ℓ. Then F_ℓ≡∑ cϕ_5ℓ n+524q^n/24ℓ (with additional work it can be shown that F_ℓ is congruent modulo ℓ to an element of S_(ℓ-2)/25,∙ 5ν^-5ℓ). Here we will discuss only the primes ℓ=7, 11, and 13. When ℓ=7, 11, we find that F_ℓ≡ 0ℓ. In other words we have the congruences cϕ_5(7n+4) ≡ 0 7, cϕ_5(11n+8) ≡ 011. We note that there are similar congruences for cϕ_7 and cϕ_11, which can also be deduced from (1.13)–(1.15) of <cit.>. These are analogues of Ramanujan's well known congruences for p(n). When ℓ=13 the situation is more interesting. Define F̃_13(z):=6η^12(z)/η(5z)+7η^5(5z)η^6(z)+9η^11(5z)= 6q^7/24-65 q^31/24+291 q^55/24+⋯∈ S_11/25,∙ 5ν^7. We find that F_13≡F̃_1313. By Theorem <ref>, we have _7(F̃_13)= 6q-96q^2+486q^3+1536q^4+3376q^5+⋯∈ S_10^ 2, 330, +1, -1. There is a unique newform <cit.> g_30 = q - 16q^2 + 81q^3 + 256q^4 - 625q^5+⋯∈ S_10^30, +1, -1 and a unique newform <cit.> g_6=q - 16q^2 + 81q^3 + 256q^4 + 2694q^5+⋯∈ S_10^6, +1, -1. 
We find by computing enough coefficients that
𝒮_7(F̃_13) = (2221/537) g_6 - (3544837/537) g_6 V_5 + (1001/537) g_30 ≡ 6g_6 + 4g_6 V_5 (mod 13).
Suppose that Q is a prime with T_Q g_6 ≡ β_Q Q^4 g_6 (mod 13) with β_Q ∈ {±1}. By (<ref>) and the argument in Lemma <ref>, we find that
T_Q^2 F̃_13 ≡ (12/Q) β_Q Q^4 F̃_13 (mod 13).
Then Lemma <ref> shows that
cϕ_5((13Q^2 n+5)/24) ≡ 0 (mod 13) if (n/Q) = β_Q (-5/Q).
By computing the eigenvalues of g_6, we find the following congruences for Q < 2000 (this table can easily be expanded).

cϕ_5((13Q^2 n+5)/24) ≡ 0 (mod 13) if (n/Q) = ε_Q:
ε_Q = 1: Q = 103, 109, 283, 727, 769, 809, 991, 1063, 1223, 1231, 1259, 1291, 1307, 1367, 1409, 1543, 1733, 1789, 1831, 1861
ε_Q = -1: Q = 97, 191, 241, 251, 397, 409, 439, 463, 751, 823, 839, 1229, 1277, 1321, 1361, 1621, 1657, 1933, 1979, 1993

As in (<ref>) each of these gives rise to many congruences of the form cϕ_5(13Q^3 n + β) ≡ 0 (mod 13) by selecting n in residue classes modulo 24Q.
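As a quick arithmetic sanity check, the displayed decomposition of 𝒮_7(F̃_13) can be verified on the first few coefficients and reduced mod 13 in a few lines of Python. In this sketch the only inputs are the rational coefficients and the q-expansion coefficients quoted above; everything else is elementary arithmetic.

from fractions import Fraction

# Rational coefficients in S_7(F~_13) = a*g_6 + b*(g_6 V_5) + c*g_30.
a = Fraction(2221, 537)
b = Fraction(-3544837, 537)
c = Fraction(1001, 537)

# First five q-expansion coefficients, copied from the expansions above.
S7F  = [6, -96, 486, 1536, 3376]   # S_7(F~_13)
g6   = [1, -16, 81, 256, 2694]     # newform of level 6
g30  = [1, -16, 81, 256, -625]     # newform of level 30
g6V5 = [0, 0, 0, 0, 1]             # g_6 V_5 begins at q^5

# Check the exact linear combination coefficient by coefficient.
for n in range(5):
    assert Fraction(S7F[n]) == a * g6[n] + b * g6V5[n] + c * g30[n]

# Reduce the rational coefficients mod 13 (the denominator 537 is prime to 13).
def red13(x):
    return x.numerator * pow(x.denominator, -1, 13) % 13

print(red13(a), red13(b), red13(c))   # expected output: 6 4 0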
http://arxiv.org/abs/2307.06236v1
20230712152707
Statistical complexity and connectivity relationship in cultured neural networks
[ "A. Tlaie", "L. M. Ballesteros-Esteban", "I. Leyva", "I. Sendina-Nadal" ]
nlin.AO
[ "nlin.AO" ]
1,2]A. Tlaiecorrau [corrau]Corresponding author [email protected] 1,2]L.M. Ballesteros-Esteban 1,2]I. Leyva 1,2]I. Sendiña-Nadal [1]Complex Systems Group & GISC, Universidad Rey Juan Carlos, 28933 Móstoles, Madrid, Spain [2]Center for Biomedical Technology, Universidad Politécnica de Madrid, 28223 Pozuelo de Alarcón, Madrid, Spain We explore the interplay between the topological relevance of a neuron and its dynamical traces in experimental cultured neuronal networks. We monitor the growth and development of these networks to characterise the evolution of their connectivity. Then, we explore the structure-dynamics relationship by simulating a biophysically plausible dynamical model on top of each networks' nodes. In the weakly coupling regime, the statistical complexity of each single node dynamics is found to be anti-correlated with their degree centrality, with nodes of higher degree displaying lower complexity levels. Our results imply that it is possible to infer the degree distribution of the network connectivity only from individual dynamical measurements. § INTRODUCTION One of the main research lines in the study of the dynamics of complex networks has been the deep relationship between the connectivity and the dynamics of the nodes, and how this interaction shapes the emergence of a collective state such as synchronisation <cit.>. An enormous effort has been devoted to the understanding of this phenomenon, and the knowledge gathered so far has driven the advance in crucial applications in brain dynamics <cit.>, power grids <cit.>, and in many other fields where synchronisation is essential <cit.> for the system's proper functioning. Commonly, studies have focused in states of full synchronisation <cit.>. Nevertheless, there are very relevant cases in which only a partial or weak synchronisation level is achieved <cit.>, and often this state becomes optimal to balance functional integration and segregation in the system <cit.> while a complete coordination is evidencing the existence of a pathological condition. Several investigations <cit.> have shown that nodes play different roles in the ensemble dynamics depending on their topological position and intrinsic dynamics <cit.>. One of the most explored situation is that of the hubs acting as coordinators of the dynamics of the whole system <cit.>, being the first nodes to synchronise among them <cit.> and to the mean field <cit.>, while the rest of the nodes progressively locks the hubs dynamics. This effect of the topology on the dynamics in the weakly synchronised regime opens the question of whether it is possible to infer the network architecture from statistical correlations among the coupled units <cit.>. Currently, a great amount of the research is being conducted in this sense. In particular, the computational neuroscience field roots in the hypothesis that dynamical correlations (which can be recorded in non-invasive ways) are greatly constrained and induced by the anatomical structure of the brain <cit.>. From these site-to-site correlation maps, the functional brain networks, it is often possible to obtain information about the underlying topological networks <cit.>. However, it has been less explored the fact that this structural-dynamical interaction also plays in the other way around: just as the dynamics of each node influences the ensemble, the ensemble imprints its structural marks into the dynamics of each individual node <cit.>. 
We make the assumption that, long before the coupling strength is high enough to induce synchronisation, the dynamical changes at the node level are encoding the imprint of its structural role. This relevant feature could be used to extract information about the network without making any reference to pairwise correlations, particularly in those cases where the structure is unknown or unreliable, as we showed in a previous work <cit.>. Here, we extend our study of the influence that the ensemble has over the node dynamics to an experimental case. We culture networks of neurons coming from Schistocerca gregaria and study the potential relationship between a simulated dynamical model (the Morris-Lecar neuron) and the anatomical network structure of a neuronal culture. The main motivation for this study is that in cultured neuronal networks the simultaneous obtention of structural and dynamical information is not possible, either because one recording technique influences the other measurement or mainly because the culture is not able to survive to both measurements. § EXPERIMENTAL SETUP: CULTURING THE NETWORK To analyse the spatial structure of a real network, we focus on the study of cultured neuronal networks (CNNs), considered as a simplified version of a more complex network of the central nervous system <cit.>. For this purpose , we analyse the network structure in this CNN model by means of optical microscopy techniques, extracting the detailed connectivity and its statistical topological properties. Our CNNs were obtained from Schistocerca gregaria specimens, also known as desert locusts. As they share basic neuronal features with vertebrates, this invertebrate model has been recently used in neuroscience as an easier approach for the understanding of more complex neural systems <cit.>. The large size of its neurons makes it ideal for observing the structure of the network, as an alternative to the mammals. In our experiments we follow the protocol described in <cit.>. Each locust is dissected to extract its frontal ganglion, formed by approximately 100 neurons <cit.>. To obtain an intermediate neuron density that allows us to study a complex network morphology we extracted 12 ganglia per culture. After the dissection, the frontal ganglia endure a chemical and mechanical procedure to remove all the connections and dissociate the neurons. The neuronal somata are cultured in a Petri dish, in an enriched environment to allow the neurites to regrowth and form a new connectivity network. The cultures are monitored in vitro from day 0 (DIV0, DIV=days in vitro) to day 14 (DIV14). The data used in this work correspond to 6 cultures grown in the same conditions. We inspected the morphological features of the cultured networks using a phase contrast inverted microscope (Eclipse Ti-S, Nikon) with a 10x air objective (Achromat, ADL, NA 0.25) and an automated motorised XYZ stage controller. High resolution images were obtained in a daily basis. In order to analyse the spatial network, we need to extract the corresponding mathematical graph. To do so, we process the culture images by means of an image segmentation algorithm <cit.>. In Fig. <ref> we portray the whole process, starting from a typical microscope image of the culture (Fig. <ref>(a) shows just a small area) and ending up with the output of the segmentation detecting neurons (and aggregates of neurons) and the neuronal processess connecting them Fig. <ref>(a). 
The algorithm is summarised in the following steps: * The red layer of a RGB high-resolution image of the recorded cultured network is processed (Fig.<ref>(a)). * The image is segmented and thresholded to separate background from foreground areas. Then neurons and aggregates of neurons (red areas in Fig. <ref>(b)) and neurites (green paths in Fig. <ref>(b)) are identified separately . * Both neurons and neurites are connected and coded in the adjacency matrix where single and clustered neurons are the nodes and neurites are the links between them. Branching and end points of the neurites are also registered as junction nodes in the graph, even when there is no neuron on them. This provides a complete version of the graph that we call the full graph (Fig. <ref>(c)) with two types of nodes, those corresponding to neurons and the ones denoting a branching point in the neuronal process path connecting two neurons. * The previous data is used to build a reduced version of the graph, where only neurons (or neuronal clusters) are the nodes, and the links represent the existence of a path between neuronal clusters, eventually through branching points. In this graph, junction nodes have been removed and we observe a more direct path between neuronal clusters, obtaining a simple version of the matrix, the cluster graph (Fig. <ref>(d)). We analyse the morphological and topological properties of the cultured network using both full and cluster graphs, where the links are unweighted. With the purpose of characterising the segregation and integration of the cultured neuronal network along the experiment, in Fig. <ref> we measure the longitudinal progression of the averaged clustering coefficient (C) and shortest path length L, normalised by the size of the largest connected component S_1 <cit.>, resulting L/S_1. The relationship between C and L is often used as an indicator of the balance between the local and long-distance connectivity in the network. These aforementioned parameters were measured in both the full graph (with neuronal clusters and junction nodes) and in the cluster graph (with only neuronal clusters nodes). In the full graph (Fig. <ref> (a)), the normalised shortest path L /S_1 shows a high mean value at the early days of the culturing, where the connectivity is still not fully developed. Between DIV3 and DIV6 there is a significant decrease and showed no significant change thereafter, meaning a high integration degree in the mature culture network. The clustering coefficient was characterised by a very low mean value, showing a slight increase between DIV3 and DIV6, when the network development occurs. The low mean values of C are due to the fact that both neurons and branching points are considered nodes meaning that the probability of forming triangles is reduced (see Fig. <ref>(b)). In the case of the cluster graphs (Fig. <ref> (d)) we observe a similar trend in L /S_1 (see Fig. <ref>(b)) as the one described for the full graph. On the contrary, C exhibits higher mean values, with a more acute increase between DIV 3 and DIV 6, that coincides with the most intense developmental phase. As neuronal aggregates are the only nodes in this cluster graph and junctions are not represented as nodes, the mean values of C are more accurate with the connected structure. After that point, in the mature neuronal network these two parameters keep constant values <cit.>. 
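For reference, both quantities can be extracted directly from the graphs produced by the segmentation step. The following Python sketch (assuming the cluster graph is available as a plain edge list, here under a hypothetical file name, and using the networkx library) computes the average clustering coefficient C and the shortest path length normalised by the size of the largest connected component, L/S_1, with L evaluated on that component.

import networkx as nx

# Hypothetical input: one "u v" pair per line, produced by the segmentation step.
G = nx.read_edgelist("cluster_graph_DIV7.txt")

# Average clustering coefficient C over all nodes.
C = nx.average_clustering(G)

# Largest connected component and its size S_1.
largest_cc = max(nx.connected_components(G), key=len)
S1 = len(largest_cc)

# Average shortest path length L on the largest component, normalised by S_1.
L = nx.average_shortest_path_length(G.subgraph(largest_cc))
print(f"C = {C:.3f}, L/S_1 = {L / S1:.4f}")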
The analysis of these parameters in both types of graphs concludes with the emergence of a mature cultured neuronal network from an initial random stage. This evolved structure is characterised by high clustering coefficients and low mean path values, indicating the presence of a mature network with high segregation (favored by high clustering values) and integration (facilitated by the existence of small shortest paths) levels. These are the characteristics of a small world structure, where the high tendency to form clusters of nodes in highly interconnected subgroups and short distance between them contribute to an optimal functionality in the network <cit.>. We also analized the degree distribution P(k) of these networks, being k the number of links that each node has. In Fig. <ref> we plot an example of the cumulative degree distribution P_c(k) of an in-vitro clustered network at DIV7 (black squares), compared to equivalent simulated networks (same number of nodes and links) obtained from usual generative models: random Erdös-Renyi (ER, blue diamonds), scale-free obtained by Barabasi-Albert algorithm (SF, red dots) and a spatial network with a distance-dependence linkage pattern (spatial-ER green diamonds). As described in<cit.>, the cultured networks belong to the single-scale type as they show a well defined k. The study of P_cum(k) reveals a fast decay with a large number of nodes with similar number of connections, and a few ones with a different and large node degrees. As it can be seen, the experimental connectivity largely differs from the pure random ER, showing instead shared features between SF and spatial networks. § DYNAMICAL MODEL Once we have extracted the connectivity of real neuronal cultures , we can provide a dynamical behavior to their nodes, in order to enable the exploration of the potential interplay between structure and dynamics. For this study we implemented the bio-inspired Morris-Lecar (ML) model <cit.>, whose equations describing the membrane potential behavior for each unit read <cit.>: C V̇_̇i̇ = -g_ X M_∞( V_i-V_ X )^Ionic channels + qξ_i + σ/K∑_j a_ije^-2(t-t_j) (V_0-V_i)^Synaptic function_I_i + I^ext_i , Ẇ_̇i̇ = ϕ τ_W ( W_∞-W_i ) where V_i and W_i are, respectively, the membrane potential and the fraction of open K^+ channels of the ith neuron and M_∞, W_∞, and τ_W are hyperbolic functions dependent on V_i and ϕ is a reference frequency. The parameters g_ X and V_ X account for the electric conductance and equilibrium potentials of the X={K,Ca,leaky} channels. The external current I_i^ext=50.0 mA is the same for all the neurons and is chosen such that neurons are sub-threshold to neuronal firing which is induced by the white Gaussian noise qξ_i of zero mean and intensity q. The coupling of the neuron ith with the neuron ensemble is described by the injected synaptic current I_i, given by the superposition of all the post-synaptic potentials emitted by the neighbours of node i in the past, being t_j the time of the last spike of node j, and the corresponding element of the adyacency matrix is a_ij=1 if there is a link between nodes i,j and a_ij=0 otherwise. The synaptic conductance σ, normalised by the largest node degree present in the network K, plays the role of coupling intensity. Additionally, the channel voltage-dependent saturation values are given by the following functions: M_∞(V_i) = 1/2[1+tanh(V_i-V_1/V_2)], W_∞(V_i) = 1/2[1+tanh(V_i-V_3/V_4)], τ_W(V_i) = cosh( V_i-V_3/2V_4). 
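As an illustration of Eqs. (<ref>), a minimal Python sketch of one uncoupled, noise-driven unit integrated with an Euler–Maruyama step is given below. All parameter values are illustrative placeholders rather than the ones used in our simulations (those are listed in the cited references), and the K^+ current is written with the gating variable W, as in the standard form of the Morris–Lecar model.

import numpy as np

# Illustrative placeholder parameters (not the values used in the paper).
C_m, phi = 20.0, 0.04
g_Ca, g_K, g_L = 4.0, 8.0, 2.0
V_Ca, V_K, V_L = 120.0, -84.0, -60.0
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
I_ext, q_noise = 50.0, 2.0
dt, T = 0.05, 2000.0

def M_inf(V): return 0.5 * (1.0 + np.tanh((V - V1) / V2))
def W_inf(V): return 0.5 * (1.0 + np.tanh((V - V3) / V4))
def tau_W(V): return np.cosh((V - V3) / (2.0 * V4))

V, W = -50.0, 0.1
trace = []
for _ in range(int(T / dt)):
    I_ion = (g_Ca * M_inf(V) * (V - V_Ca)
             + g_K * W * (V - V_K)
             + g_L * (V - V_L))
    # Euler-Maruyama step: the white-noise term enters the voltage equation.
    V += dt * (-I_ion + I_ext) / C_m + q_noise * np.sqrt(dt) * np.random.randn() / C_m
    W += dt * phi * tau_W(V) * (W_inf(V) - W)
    trace.append(V)
# Threshold crossings of the recorded trace give the spike times that feed
# the inter-spike-interval analysis used below.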
We chose the parameters such that the neurons in the simulations corresponded to type II class excitability for the neuron dynamics, which means that a discontinuous transition is found in the dependence of the spiking frequency on the external current. The values for all the parameters can be found in Refs. <cit.>. We have to remark, however, that this model was originally conceived for a single neuron and in this work we are dealing with aggregates of 20 of them as our individual nodes. Although this quantity is not enough for employing a neural mass model, it should be noted that, strictly, it is not a single neuron either. § STATISTICAL CHARACTERISATION OF THE DYNAMICS With the purpose of providing a solid description of the system, we use two different quantities: a global and a local one. The global measure is the synchronisation level of the network, i.e. how similar are the dynamical outputs of our units, while the local one is an individual measure of the complexity of a node's time series. §.§ Synchronization measure In order to quantify the level of synchronization we estimate how many neurons fire within the same time window. The total simulation time T is divided in n=1,…,N_b bins of a convenient size τ, such that T=N_bτ, and the binary quantity B_i(n) is defined such that B_i(n)=1 if the ith neuron spiked within nth interval and 0 otherwise. The synchronisation between the spiking sequences of neurons i and j is therefore characterised with the pairwise correlation matrix s_ij∈ [0,1] s_ij=∑_n=1^N_bB_i(n)B_j(n)/∑_n=1^N_bB_i(n)∑_n=1^N_bB_j(n), where the term in the denominator is a normalisation factor and s_ij=1 means full coincidence between the two spiking series. The ensemble average of s_ij, S=⟨ s_ij⟩ =2/N(N-1)∑_i,j=1, i≠ j^N s_ij is a measure of the global synchronisation in the network. In Fig. <ref>(a) we plot an example of the averaged value of S as a function of σ for an experimental CNN clustered network with N=246 nodes, obtained at DIV7 as explained in Sec. <ref>. A transition from an asynchronous to an almost synchronous firing is observed as the synaptic conductance σ is increased, which confirms that the structure is suitable for inducing synchronisation in the system. §.§ Statistical complexity Once we know the effect that the relationship between network topology and dynamics has on the global state, we explore the effect that the presence of the ensemble has on the single node dynamics by measuring the statistical complexity of the single nodes along the synchronisation process. As the typical neuronal dynamics exhibited by Eq. (<ref>) consists of a sequence of L spikes whose amplitude variability is negligible, we focused on the complexity C_i of the sequence of inter-spike intervals (t_l-t_l-1) (ISI) of each neuron. The ordinal patterns formalism <cit.> associates a symbolic sequence to a series <cit.>, transforming the actual values of the data series into a set of natural numbers. To do that, the ISI series of each neuron is divided in sequences of length D. In each sequence, the data values are ordered in terms of their relative magnitudes <cit.>, which provides the corresponding symbolic sequence. The information content of these sequences is then evaluated as a function of the complexity measure. The complete process is illustrated in Fig. <ref>. This is a broad-field, well-established and known method, statistically reliable and robust to noise, extremely fast in computation and with a clear definition and interpretation in physical terms. 
It is derived from two also well-established measures (divergence and entropy), also easily interpretable when analysing non linear dynamical systems. In addition, it only requires soft criteria, namely that the time series must be weakly-stationary, i.e., for k≤ D, the probability for ISI_t<ISI_t+k should not depend on time <cit.>, and that M>>D! (where M is the number of points of the entire time series of ISIs), which are easily checkable. We proceed in the following way, as shown in Fig. <ref>: * From each single node time series in the simulated (Morris-Lecar neuron) signal, we detect the spikes (a) and extract the duration between two consecutive spikes (b). * We compute series of the inter-spike time intervals ISI (c) and save them in an array (d), which will be our object of study. * The ISI series is divided in sequences of length D (D = 3 in this illustration, (e)). We compare consecutive points in each sequence and associate a natural number to each of them (f), ranking them based on their relative size. * We count how many times a certain symbolic sequence (or pattern) π of length D appears (N_π). * Then, we define a probability of occurrence for each pattern: P_π = N_π/N_T, where N_T is the total number of sequences of length D in which the time series is divided, i.e. N_T = (L-1)/D, being L the total number of spikes. * We construct a probability distribution, which we call P from now on, from all possible symbolic sequences of length D with probability P_π. Once the probability distribution P is obtained, the statistical complexity is defined. It is a measure that should be minimal both for pure noise and absolute regularity, and provide a bounded value for other regimes. Being this so, we need to characterise the disorder and a correcting term (i.e., a way of comparing known probability distributions with the actual one). The statistical complexity (C), as defined in Ref. <cit.>, is the product of the Permutation Entropy (H) and the Disequilibrium (Q). To define the permutation entropy H, the first step is the evaluation of the Shannon entropy, that gives an idea of the predictability of the series: S[P] = - ∑_j=1^D! p_j ·log(p_j) The permutation entropy corresponds to the normalisation of S with respect to the entropy of the uniform probability distribution, S_max: H = S/S_max, S_max = S[P_e], P_e ≡{ p_i=1/D! }_i=1,...,D! 0 ≤ H ≤ 1 Regarding the disequilibrium Q, it is a way of measuring the distance of the actual probability distribution P with the equilibrium probability distribution P_e. This notion of distance can be acquired by several means; in this work, we adopt the statistical distance given by the Kullback-Leibler <cit.> relative entropy (K): K[P|P_e] = - ∑_j=1^D! p_j ·log(p_e) + ∑_j=1^D! p_j ·log(p_j) = = S[P|P_e] - S[P] where S[P|P_e] is the Shannon cross entropy. If we now symmetrise Eq. (<ref>), we get the Jensen-Shannon divergence (J): J[P|P_e] = (K[P|P_e]+K[P_e|P])/2 →_(*) J[P|P_e] = S[(P+P_e)/2] - S[P]/2 - S[P_e]/2 where (*) is simply the rewritten version in terms of S. Finally, we can write the disequilibrium Q as the normalised version of J as: Q = Q_0 J[P|P_e] with Q_0 = N+1/Nlog(N+1)-2log(2N)+log(N)^-1, implying again 0 ≤ Q ≤ 1. We then just have to multiply H and Q to obtain the Complexity measure: C = H · Q § RESULTS We summarise our results for the statistical complexity in a cultured neuronal network in Fig. <ref>. 
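The complexity values entering this summary are obtained from the ISI series of each node through the pipeline just described. A compact Python sketch of that computation (non-overlapping ordinal patterns of length D, with D = 3 as in the illustration above, and the standard normalisation constant for the Jensen–Shannon term) reads:

import numpy as np
from itertools import permutations
from math import log, factorial

def statistical_complexity(isi, D=3):
    """Ordinal-pattern statistical complexity C = H * Q of an ISI series."""
    n_pat = factorial(D)
    patterns = {p: i for i, p in enumerate(permutations(range(D)))}
    counts = np.zeros(n_pat)
    # Non-overlapping blocks of length D, each mapped to its rank pattern.
    for start in range(0, len(isi) - D + 1, D):
        block = np.asarray(isi[start:start + D])
        counts[patterns[tuple(np.argsort(block).tolist())]] += 1
    P = counts / counts.sum()
    Pe = np.full(n_pat, 1.0 / n_pat)

    def shannon(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    H = shannon(P) / log(n_pat)                      # permutation entropy
    J = shannon((P + Pe) / 2) - shannon(P) / 2 - shannon(Pe) / 2
    Q0 = -2.0 / ((n_pat + 1) / n_pat * log(n_pat + 1)
                 - 2 * log(2 * n_pat) + log(n_pat))  # disequilibrium normalisation
    return H * Q0 * J

# Example with a synthetic ISI array (placeholder for the simulated data).
rng = np.random.default_rng(0)
print(statistical_complexity(rng.exponential(1.0, 3000)))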
As commented above, in panel (a) we show as a reference the synchronisation level vs the synaptic conductance σ for the dynamics simulated on top of a DIV7 experimental network. In panel Fig. <ref>(b) we plot the value of ⟨ C⟩_k as a function of the conductance σ for two nodes with high (k=30) and poor (k=3) connectivity, being ⟨ C⟩_k =∑_[i|k_i=k] C_i/N_k, with N_k the number of nodes with degree k. The results evidences that, on that same route to synchronisation, there exists differences between how hubs and peripheral nodes behave due to the presence of the ensemble, even when the global synchronisation level is still very low. The main detail that catches our attention is that the peripheral nodes show a greater complexity than the hubs (σ = 950). To further explore this finding, in the third panel we depict the statistical complexity vs the degree, for this value of σ. We can extract an interesting result here: for σ = 950, there exists an anti-correlation between ⟨ C⟩_k and k. This anti-correlation observed in cultured neuronal cultures is not as evident as the one reported in Ref. <cit.> in SF networks, but taking into account that the structure, which has grown in a limited spatial domain, does not belong to the class of power-law networks (as it was already discussed) and that the model (Morris-Lecar) was not originally designed for this kind of neuronal aggregates, one can conclude that the anti-correlation between the statistical complexity C and the degree k is a quite robust feature. § CONCLUSIONS In Ref. <cit.> we investigated the relationship between the statistical complexity and topology in synthetically generated networks. Here, we focused on the study of real-world topologies, as the ones exhibited by self-organised neuronal cultures. The longitudinal study of the morphology of these networks shows an evolution in the topology from isolated neurons to a percolated heterogeneous topology with small-work properties. In order to study the structure-dynamics interaction in these networks, we simulated a dynamical model (the Morris-Lecar neuron) on top of experimental neuronal networks at the mature developmental stage. We evidenced that, in the weakly coupled regime, it is possible to anti-correlate the individual node statistical complexity of the series of the neuronal inter-spike intervals with the degree of a node. Therefore, it would be possible to infer the degree distribution of the network from node dynamical measurements, which confirms the result obtained for synthetic networks <cit.>. This approach based on the computation of complexity values retrieved from single node dynamics, provides a different perspective than the usual methods of network inference, since it does not imply node-to-node calculations. Additionally, our method does not impose the need of measuring the dynamics of every node: it can be an incomplete measure, an still it will provide their relative roles. We hope this approach will be useful in applications where the knowledge of the degree distribution, instead of the detailed connectome, provides a sufficient insight over an unknown topology and about the functioning of the underlying system. § ACKNOWLEDGEMENTS Financial support from the Ministerio de Economía y Competitividad of Spain under project FIS2017-84151-P and from the Group of Research Excelence URJC-Banco de Santander is acknowledged. A.T. and L.B.-E. acknowledge support from the European Youth Employment Initiative.
http://arxiv.org/abs/2307.07593v1
20230714193337
Mod ℓ gamma factors and a converse theorem for finite general linear groups
[ "Jacksyn Bakeberg", "Mathilde Gerbelli-Gauthier", "Heidi Goodson", "Ashwin Iyengar", "Gilbert Moss", "Robin Zhang" ]
math.NT
[ "math.NT", "math.RT", "20C33, 20C20, 11L05, 20G40, 11S40" ]
For q a power of a prime p, we study gamma factors of representations of _n(_q) over an algebraically closed field k of positive characteristic ℓ≠ p. We show that the reduction mod ℓ of the gamma factor defined in characteristic zero fails to satisfy the analogue of the local converse theorem of Piatetski-Shapiro. To remedy this, we construct gamma factors valued in arbitrary ℤ[1/p, ζ_p]-algebras A, where ζ_p is a primitive p-th root of unity, for Whittaker-type representations ρ and π of _n(_q) and _m(_q) over A. We let P(π) be the projective envelope of π and let R(π) be its endomorphism ring and define new gamma factors γ(ρ×π) = γ((ρ⊗_kR(π)) × P(π)), which take values in the local Artinian k-algebra R(π). We prove a converse theorem for cuspidal representations using the new gamma factors. When n=2 and m=1 we construct a different “new” gamma factor γ^ℓ(ρ,π), which takes values in k and satisfies a converse theorem.

§ INTRODUCTION

Let _q be a finite field of order q and characteristic p, and let ℓ be a prime different from p. In the ℓ-modular representation theory of finite groups such as _n(𝔽_q), the importance of tools such as Brauer theory and Deligne–Lusztig varieties is well-established <cit.>. In this paper, we investigate a different tool, analogous to a construction in the local Langlands program for p-adic groups: gamma factors. While they first arose in the context of complex representations, they have been fruitful in studying mod-ℓ representations of _n(ℚ_p) (<cit.>). Fix a nontrivial character ψ:_q→^×. Rankin–Selberg gamma factors γ(π×π',ψ) have been defined for pairs π, π' where π is a complex representation of _n(_q), π' is a complex representation of _m(_q), and both π and π' are assumed to be irreducible and ψ-generic <cit.>. In this context, there are converse theorems analogous to the p-adic setting, which describe sets of π' such that γ(π×π',ψ) uniquely determine π. In this paper we construct gamma factors and prove the simplest possible ℓ-modular converse theorem.

§.§ Results

Let k be an algebraically closed field of characteristic ℓ, now allowing ℓ to possibly be zero.
Irreducible complex representations of _n(_q) are defined over ℚ, they admit stable ℤ[1/p]-lattices, and the mod ℓ reductions of such lattices are equivalent up to semisimplification. All irreducible 𝔽_ℓ-representations arise through this reduction mod ℓ procedure. Since the complex gamma factor γ(π×π',ψ) lies in ℤ, a first candidate for the mod ℓ gamma factor is the mod ℓ-reduction red_ℓ(γ(π×π',ψ)). It follows from our mod ℓ functional equation (<ref>) that _ℓ(γ(π×π',ψ)) is indeed the unique gamma factor satisfying the functional equation for _ℓ(π) and _ℓ(π'). When ℓ∤#_n(_q), the mod ℓ representation theory of _n(_q) is essentially no different than the complex setting and our <ref> below implies _ℓ(γ(π×π',ψ)) indeed satisfies a mod ℓ converse theorem under this restriction. However, it fails to satisfy a mod ℓ converse theorem for general ℓ in several examples when n=2. Using SAGE computations (<ref>), we found the mod ℓ converse theorem for _2(_q) fails when (ℓ, q) = (2,5), (2,17), (3,7), (3,19), (5,11), (11,23), (23,47), (29,59), though we verify it holds for all other pairs (ℓ,q) with ℓ≤ 11 and q = p ≤ 23. In all the counterexamples we found, q has the form 2ℓ^k+1, and we conjecture that these are the only cases in which it can fail. In analogy with the results in the p-adic setting in <cit.>, the point of failure in the classical proof is the failure of so-called “L^2-completeness” of the Whittaker space. This raises the question of how to construct a “new” gamma factor for any ℓ≠ p that does satisfy a converse theorem, and which returns the classical gamma factor when ℓ∤#_n(_q). The first step in constructing a “new” gamma factor is establishing a functional equation over arbitrary ℤ[1/p,ζ_p]-algebras A, where ζ_p is a p-th root of unity. Bernstein and Zelevinsky developed a theory of “derivatives” for complex representations of _n(ℱ) <cit.> with respect to a fixed additive character ψ on ℱ. Fixing ψ:_q→ℤ[1/p,ζ_p]^×, Vignéras observed derivatives work equally well for _n(_q)-representations on A-modules. If π is an A[_n(_q)]-module its “i-th derivative” π^(i) is a representation of _n-i(_q), and the restriction π|_P_n to the mirabolic subgroup P_n (matrices with bottom row (0,…,0,1)) is glued from π^(1),…, π^(n) in a simple way. The top derivative π^(n) is equivalent to the (N_n,ψ)-coinvariants, where N_n is the unipotent upper triangular subgroup, to which ψ is extended in a natural way. Thus (π^(n))^∨ is the space of Whittaker models of π (Frobenius reciprocity). The starting point of our construction is to restrict our attention to π of Whittaker type, meaning π^(n) (and hence (π^(n))^∨) is free of rank one over A, which generalizes the “irreducible generic” hypothesis ubiquitous in the A = ℂ case. In particular, this allows one to speak of the Whittaker model 𝒲(π,ψ_A)⊂_N^Gψ_A, where ψ_A = ψ⊗_ℤ[1/p,ζ_p]A. To state our first main result, we define an A[_m(_q)]-module π' to be exceptional for π if there exists k=1,…, m such that _A[_k(_q)](𝒲(π,ψ_A)^(n-k),(𝒲(π',ψ^-1_A)^(m-k))^∨)≠ 0. Suppose π and π' are Whittaker type A[_n(_q)]- and A[_m(_q)]-modules, respectively, and π' is not exceptional for π. There exists a unique element γ(π×π',ψ) of A^× such that (∑_x∈ N_m\ G_mW([ x 0; 0 I_n-m ])W'(x))γ(π×π',ψ) = ∑_x∈ N_m\ G_m∑_y∈ M_m, n-m-1W([ 0 1 0; 0 0 I_n-m-1; x 0 y ])W'(x) for all W∈𝒲(V,ψ_A), W'∈𝒲(V',ψ_A^-1). Note that when A=k is a field, a representation π of _n(_q) is cuspidal if and only π^(n) is its only non-zero derivative, in which case there are no exceptional representations π'. 
Upon specializing to k = we obtain a functional equation that is more general than any appearing in the literature because π need not be cuspidal, nor even irreducible. The functional equation is known to fail without the non-exceptional hypothesis on π' (<cit.>). Our second main result is a converse theorem for new gamma factors of irreducible cuspidal k-representations. The novelty of these gamma factors is that they are valued in certain finite dimensional local k-algebras instead of k itself, which necessitates the level of generality on the coefficients that <ref> provides. These finite-dimensional k-algebras arise as the endomorphism rings of projective envelopes of generic representations. More precisely, given π' an irreducible generic k-representation of _m(_q) with projective cover P(π'), we take A=R(π') = _k[_m(_q)](P(π')) and set γ̃(π×π',ψ)= γ((π⊗_k R(π'))× P(π'),ψ)∈ R(π')^× , which satisfies the following converse theorem. Let π_1 and π_2 be irreducible cuspidal k-representations of _n(_q) and suppose γ̃(π_1×π',ψ) = γ̃(π_2×π',ψ) for all irreducible generic k-representations π' of _n-1(_q). Then π_1≅π_2. When char(k)∤#_n(_q), P(π')=π' and R(π')=k, so γ̃(π×π',ψ) = γ(π×π',ψ). In particular, <ref> reduces to a finite field version of Henniart's n× (n-1) converse theorem <cit.> when k=ℂ. In <ref>, we propose an alternative “new gamma factor” for n=2 and m=1, which also specializes to the classical one for ℓ∤#_n(_q), but which is an element of the base field k and does not involve exotic k-algebras. Remarkably, it satisfies a functional equation and converse theorem for cuspidals. This method shares some similarities with <cit.>, including the fact that it does not appear to generalize beyond n=2. §.§ Future directions §.§.§ Macdonald correspondence in families In <cit.>, Macdonald established an analogue of the local Langlands correspondence for _n(_q). If ℱ is a nonarchimedean local field with residue field _q, it can be formulated as a bijection between complex irreducible representations of _n(_q) and tame inertial classes of complex representations of the Weil group of ℱ. This bijection preserves gamma factors, which (following <cit.>) Macdonald defined analogously to the Godement–Jacquet factors for representations of _n(ℱ). Later, Vignéras found a similar but more subtle bijection in the mod ℓ setting (<cit.>), but she did not consider gamma factors. More recently, a local Langlands correspondence “in families” has been established for _n(_q). If 𝒪 is a complete discrete valuation ring with residue field 𝔽_ℓ, it takes the form of an isomorphism of commutative rings B_q,n≅_𝒪[_n(_q)](_N_n^_n(_q)ψ_𝒪), where B_q,n is the ring of functions on a natural moduli space of tame ℓ-adically continuous 𝒪-valued inertial classes, and ψ_𝒪:N_n→𝒪^× is a nondegenerate character on the unipotent upper triangular subgroup N_n. The first approach to proving the existence of such an isomorphism was to deduce it as a consequence of the local Langlands correspondence in families for _n(ℱ) <cit.> (see also <cit.>) which, in turn, requires gamma factors, converse theorems, and the classical local Langlands correspondence for _n(ℱ), as an input. More recently, Li and Shotton found a remarkable second proof of “finite fields local Langlands in families,” which works for any reductive group G(_q) whose dual group has simply connected derived subgroup. Their proof uses purely finite fields methods (<cit.>), but they do not consider gamma factors.
The present paper is a first step toward understanding how Rankin–Selberg gamma factors for _n(_q) fit into the ℓ-modular correspondence and the families correspondence. In future work, the authors plan to apply the converse theorems proved here to address the question of whether the Macdonald bijection and its mod ℓ analog are the unique sequence of bijections (one for each n) matching the Rankin–Selberg gamma factors γ(π×π',ψ) defined here with Deligne's ϵ_0 factors of the tensor product of the corresponding inertial classes of W_-representations. To show the local Langlands correspondence for _n(_q) in families preserves our new gamma factors (and is uniquely characterized by this property), one would need to establish a compatibility between Curtis homomorphisms, which were used in <cit.> and <cit.> to construct the local Langlands in families, and the Rankin–Selberg gamma factors. It seems that a multiplicativity property for our gamma factors would be needed here. §.§.§ Converse theorem for generic irreducibles Our proof of <ref> only works for cuspidals because cuspidal representations π have no exceptional representations π'. When π' is exceptional for π, the gamma factor γ(π×π',ψ) can no longer be defined using a functional equation, but in the complex setting it is traditionally defined using Bessel vectors (<cit.>). Very recently, Soudry and Zelingher <cit.> proved a multiplicativity property for γ(π×π',ψ) thus defined, and used it to deduce a converse theorem applying to all irreducible generic representations π. In future work we plan to investigate the question of whether this remains true in the mod ℓ setting, either by establishing a generalization of the Bessel vector construction and the multiplicativity property to characteristic ℓ>0, or by using the functional equation even while excluding exceptional π'. §.§.§ Jacquet's conjecture in the mod ℓ setting In the complex setting, it has been proved that π is characterized by gamma factors γ(π×π',ψ) where π' ranges over irreducible generic representations of _m(_q) for m=1,…,⌊n/2⌋ (<cit.> for cuspidal π, and <cit.> in general). It is natural to ask whether our <ref> remains true with m≤⌊n/2⌋; the answer is probably yes (c.f. <cit.> in the p-adic setting), but we do not to address this here. toc §.§ Acknowledgments This project began as part of the Rethinking Number Theory 2 workshop. The authors are deeply grateful to the organizers for the mathematical experience and the welcoming community. The workshop, as well as the continued work on this project after the workshop, was generously supported by the Number Theory Foundation and the American Institute of Mathematics. The authors are grateful to Rob Kurinczuk for several helpful conversations. J.B. was supported by National Science Foundation Grants DGE-1840990 and DGE-2234657, H.G. was supported by National Science Foundation grant DMS-2201085 and a PSC-CUNY Award, jointly funded by The Professional Staff Congress and The City University of New York, G.M. was supported by National Science Foundation Grants DMS-2001272 and DMS-2234339, R.Z. was supported by National Science Foundation Grants DGE-1644869 and DMS-2303280. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect views of the National Science Foundation. toc § PRELIMINARIES In this section we collect some basic facts about the representation theory of _n(_q). 
§.§ Subgroups of GL(n, Fq) Let p be a prime, q be a power of p, and _q be the field with q elements. For n a positive integer, let G_n _n(_q). Denote by _m_1,m_2(_q) the vector space of m_1× m_2 matrices over _q. The mirabolic subgroup of G_n is P_n [ g y; 0 1 ]∈ G_n : g ∈ G_n-1, y ∈_n-1,1(_q), with unipotent radical U_n [ I_n-1 y; 0 1 ] : y ∈_n-1,1(_q)≤ P_n, so that P_n = U_n ⋊ G_n-1. We also denote N_n ([ 1 * *; 0 1 ⋱ ⋮; ⋮ ⋱ ⋱ *; 0 0 1 ]) the subgroup of unipotent upper-triangular matrices in G_n. We consider a sequence of subgroups interpolating between U_n and N_n: for -1 ≤ m ≤ n-1, define U_n,k:={[ I_n-k z; 0 y ] : z∈Mat_n-k, k(𝔽_q), y∈ N_k}. Note that I_n = U_n,0, U_n = U_n,1, N_n = U_n,n-1 = U_n,n. §.§ Representations Let G be a finite group. In this article, our coefficient rings R will always be assumed to be algebras over ℤ[1/p,ζ_p]. Let _R(G) denote the category of R-linear representations of G, or, equivalently, of R[G]-modules. We warn the reader that we will often use the letter V ∈_R(G), even if V is not necessarily free as an R-module. If H ≤ G is a subgroup, the induction functor _H^G: _R(H) →_R(G) sends (π,V) to the representation _H^G(π) = f: G → V : f(hg) = π(h)f(g), h ∈ H, with its natural left G-action by right multiplication on G. Frobenius reciprocity is the statement that induction is a left-adjoint to restriction: given ρ∈_R(H) and π∈_R(G), _G(_H^Gρ, π) ≃_H(ρ, π|_H). The group ring R[G] is equipped with a natural left H-action, which makes _H^G(π) naturally isomorphic to _R[H](R[G],π) as left R[G]-modules, which some authors call “coinduction”. However the distinction is unimportant because of the isomorphism given by R[G]⊗_R[H]π _H^G(π) 1⊗ v ↦ f_v , where v is an element in the space of π and f_v is the function supported on H such that f_v(h) = π(h)v, h∈ H. In particular, induction is also a right adjoint to restriction. If N < G is a subgroup such that N is invertible in R, and ψ: N→ R^× is a character, we define a projector to the submodule π^N,ψ of elements on which N acts via ψ: π →π^N,ψ v ↦ |N|^-1∑_n∈ Nψ(n)^-1π(n)v. The kernel of this projector equals the submodule V(N,ψ) generated by {π(n)v-ψ(n)v:n∈ N, v∈ V}, so π^N,ψ is canonically isomorphic to the (N,ψ)-coinvariants π_N,ψ V/V(N,ψ). Let N be a subgroup whose cardinality is invertible in R and let ψ:N→ R^× be a character. Then (π_N,ψ)^∨≅ (π^∨)_N,ψ^-1. We make use of the fact that for a finite group G, _R[G](V,W^∨)≅_R[G](W,V^∨) (<cit.>), where (-)^∨ denotes the R-linear dual equipped with its natural G-action. Applying this for G=N we have the following identifications (π_N,ψ)^∨ def=_R(π_N,ψ,R) = _R[N](π,ψ) =_R[G](π,_N^Gψ) =_R[G](_N^Gψ^-1,π^∨) <cit.> =_R[N](ψ^-1,π^∨) =(π^∨)^N,ψ^-1 =(π^∨)_N,ψ^-1 Given a nontrivial partition n_1 + ⋯ + n_r of n, there is an associated standard parabolic subgroup P_n_1,...,n_r with Levi subgroup G_n_1×⋯× G_n_r. If σ_i is a representation of G_n_i, then the parabolic induction σ_1 ×⋯×σ_r := _P_n_1,...,n_r^G_nσ_1 ⊠⋯⊠σ_r is obtained by first inflating σ_1 ⊠⋯⊠σ_r to a representation of P_n_1,...,n_r by letting its unipotent radical act trivially, and inducing the resulting representation to G_n. The corresponding “parabolic restriction" functors are known as Jacquet functors. Given a partition as above, the functor J^G_n_P_n_1,...,n_r: _R(G_n) →_R(G_n_1×⋯× G_n_r) takes a representation (π,V) ∈_R(G) to its coinvariants under the unipotent radical of P_n_1,...,n_r. The functor J^G_n_P_n_1, ..., n_r is both left- and right-adjoint to parabolic induction. 
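To make the subgroups introduced at the beginning of this subsection concrete, the following small Sage sketch realizes P_n, U_n, N_n and U_{n,k} as membership tests inside GL_n(𝔽_q) and confirms the expected orders in one small case. The helper names (in_P, in_U, in_N, in_U_k) are placeholders chosen for illustration only.

```python
# Sage sketch: the subgroups P_n, U_n, N_n, U_{n,k} of GL_n(F_q) as membership tests.
q, n = 2, 3                     # small enough to enumerate GL_3(F_2) (168 elements)
F = GF(q)
G = GL(n, F)

def in_P(g):                    # mirabolic P_n: bottom row (0, ..., 0, 1)
    return all(g[n-1, j] == 0 for j in range(n-1)) and g[n-1, n-1] == 1

def in_U(g):                    # unipotent radical U_n: identity except for the last column
    return g[n-1, n-1] == 1 and all(g[i, j] == (1 if i == j else 0)
                                    for i in range(n) for j in range(n-1))

def in_N(g):                    # unipotent upper-triangular subgroup N_n
    return all(g[i, i] == 1 for i in range(n)) and \
           all(g[i, j] == 0 for i in range(n) for j in range(i))

def in_U_k(g, k):               # U_{n,k}: identity (n-k)x(n-k) corner, N_k in the lower right
    return in_N(g) and all(g[i, j] == 0
                           for i in range(n-k) for j in range(i+1, n-k))

mats = [g.matrix() for g in G]
assert sum(1 for g in mats if in_N(g)) == q**(n*(n-1)//2)                # |N_n| = q^{n(n-1)/2}
assert sum(1 for g in mats if in_P(g)) == q**(n-1) * GL(n-1, F).order()  # P_n = U_n ⋊ G_{n-1}
```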
We say that (ρ,V) ∈_R(G_n) is cuspidal if its image under the Jacquet functor J^G_n_P_n_1, …, n_r is zero for every non-trivial partition. This is equivalent to asking that there are no nonzero morphisms from ρ to a parabolic induction. §.§ Multilinear forms Gamma factors are defined as the constants of proportionality between certain multilinear forms, once the spaces of such forms are shown to be one-dimensional. We define those spaces now. If G is a group, (ρ,V),(ρ',V'),(ρ”,V”) ∈_R(G) and χ: G → R^× is a character, let _G(V,V',χ) := _R[G](V ⊗_R V', χ) = {bilinear functions B:V× V'→ R | B(gv, gv')=χ(g)B(v,v')}. and let _G(V,V',V”) := _R[G](V ⊗_R V' ⊗_R V”, 1) = {G-invariant trilinear functions B:V × V' × V”→ R }. In the above definitions G_n acts diagonally on the tensor products. §.§ Derivative functors Let R be a Noetherian commutative ℤ[1/p, ζ_p]-algebra with 0 ≠ 1. Fix once and for all a nontrivial group homomorphism ψ:𝔽_q →ℤ[1/p, ζ_p]^× and denote by ψ_R its extension to R^× along the structure morphism ℤ[1/p, ζ_p]→ R. Promote ψ_R to a character of U_n (also denoted psi_R by abuse of notation) by letting ψ_R [ I_n-1 y; 0 1 ] = ψ_R(y_n-1), y = (y_1,...,y_n-1)^t. To analyze representations of the mirabolic subgroup P_n, we recall derivative functors, following Bernstein–Zelevinsky for p-adic general linear groups <cit.>. Specifically, define the functors _R(P_n-1) [r, yshift=0.7ex, "Φ^+"] [l, yshift=-0.7ex, "Φ^-"] _R(P_n) [r, yshift=-0.7ex, "Ψ^-"'] [l, yshift= 0.7ex, "Ψ^+"'] _R(G_n-1) where * Ψ^-(V) = V/V(U_n,1) where V(U_n,1) = ⟨{uv - v: u∈ U_n, v∈ V}⟩. It carries an action of G_n-1. * Ψ^+(V) = V and we inflate the G_n-1 action to a P_n action by letting U_n act trivially. * Φ^-(V) = V/V(U_n,ψ_R) where V(U_n,ψ_R) = ⟨{uv - ψ_R(u)v: u∈ U_n, v∈ V}⟩. It carries an action of P_n-1 because P_n-1 is the stabilizer in G_n-1 of the character ψ_R of U_n under the conjugation action defined by ψ_R↦ψ_R(g(-)g^-1). * Φ^+(V) = _P_n-1U_n^P_n(V⊗ψ_R) where V⊗ψ_R denotes the representation of V extended to P_n-1U_n by letting U_n act via ψ_R. Since P_n-1 is the normalizer of ψ_R this is well-defined. §.§.§ Properties of derivative functors Bernstein–Zelevinsky established some basic properties of these functors over p-adic general linear groups, and Vignéras has observed that the proofs work equally well in the case of finite general linear groups <cit.>. The properties we will need are the following: * They are all exact. * Ψ^- is left adjoint to Ψ^+ * Φ^+ is left adjoint to Φ^- and Φ^- is left adjoint to Φ^+. * Φ^-Φ^+≅𝕀 and Ψ^-Ψ^+≅𝕀 * Φ^-Ψ^+ = 0 and Ψ^-Φ^+ = 0 * There is a canonical exact sequence 0→Φ^+Φ^- →𝕀→Ψ^+Ψ^-→ 0. We note the following additional property: * All the functors commute with arbitrary base change. In other words, if R→ R' is a map of rings, then Φ^+(V⊗_RR') = Φ^+(V)⊗_RR', and the same for all the other functors. Given V∈_R(P_n), define the “k-th derivative” V^(k) = Ψ^-(Φ^-)^k-1(V), which is in _R(G_n-k). For V∈_R(G_n), V^(k) is defined to be the k-th derivative of its restriction to P_n. Finally, we define V^(0)=V for V∈_R(G_n). By successive application of property (6) above, any V∈_R(P_n) has a natural filtration by P_n-submodules: 0⊂ V_n⊂ V_n-1⊂⋯⊂ V_2⊂ V_1 = V, where V_k = (Φ^+)^k-1(Φ^-)^k-1(V). The successive quotients can be recovered from the derivatives of V as follows: V_k/V_k+1 = (Φ^+)^k-1Ψ^+(V^(k)). This indicates the following remarkable fact: every representation of P_n is “glued together from” representations of various G_m's for m<n. 
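To see this gluing in the smallest nontrivial case, the following worked example (an illustrative unwinding of the definitions, written here in LaTeX) spells out the filtration for n = 2; it uses that P_1 is trivial, so that P_1U_2 = N_2.

```latex
% Bernstein--Zelevinsky filtration for n = 2: V in Rep_R(P_2), with derivatives V^{(1)}, V^{(2)}.
0 \subset V_2 \subset V_1 = V, \qquad
V_2 \;=\; \Phi^+\Psi^+\bigl(V^{(2)}\bigr) \;\cong\; \operatorname{Ind}_{N_2}^{P_2}(\psi_R)\otimes_R V^{(2)},
\qquad
V_1/V_2 \;=\; \Psi^+\bigl(V^{(1)}\bigr).
% If V^{(2)} is free of rank one (the "Whittaker type" condition introduced below), the bottom
% step is Ind_{N_2}^{P_2}\psi_R; if moreover R = k is a field and V is irreducible cuspidal,
% then V^{(1)} = 0, so V|_{P_2} \cong \operatorname{Ind}_{N_2}^{P_2}\psi_k, as recalled for GL_2
% in the appendices below.
```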
The next two lemmas give explicit descriptions of the derivative functors in terms of coinvariants and parabolic restriction. Let k ≤ n-1. Extend ψ_R to a character of U_n,k via the map U_n,k→ U_n,k/[U_n,k,U_n,k] ≃_q^k →_q (y_1,...,y_k) ↦ y_1+ .. + y_k, so that in input of ψ_R is the sum of all the upper-diagonal entries, n-k-1 of which are zero. Suppose (ρ,V) ∈_R(P_n). Then (Φ^-)^k V ≃ V_U_n,k,ψ_R, the space of (U_n,k,ψ_R)-coinvariants. In particular, the n-th derivative V^(n)≃ V_N_n,ψ_R. Recall that for a subgroup H of G_n, V_H,ψ_R = V/V(H,ψ_R):= V/⟨{ψ_R(h)v-hv, H ∈ N_n, v ∈ V}⟩. We argue by induction on k. By definition Φ^- V = V_U_n,ψ_R = V_U_n,1,ψ_R. Next, (Φ^-)^kV = (V_U_n,k-1,ψ_R)_U_n-k,1 where U_n-k,1 is embedded in the upper-left diagonal block, so it suffices to show that V(U_n,k,ψ_R) = V(U_n,k-1,ψ_R) ⊕ V(U_n-k,1,ψ_R). Since U_n,k-1 and U_n-k,1 are subgroups of U_n,k, the ⊃ inclusion is immediate. For the reverse inclusion, observe that U_n,k = U_n,k-1⋊ U_n-k,1 and that U_n-k,1 centralizes ψ_R: U_n,k-1→ R. So for u ∈ U with u=xy for x ∈ U_n,k-1, y ∈ U_n-k,1, and v ∈ V we have ψ_R(xy)v-(xy)v = ψ_R(xy)v-ψ_R(x)yv + ψ_R(x)yv - (xy) v = ψ_R(yx)v-yψ_R(x)v + ψ_R(x)yv - (xy) v ∈ V(U_n,k-1ψ_R) ⊕ V(U_n-k,1ψ_R), which provides the reverse inclusion. The last statement follows from the definition of derivatives, since N_n = U_n,n-1 and Ψ^-:(P_1) →(G_0) is the identity The k-th derivative functor π↦π^(k) is the composite of parabolic restriction J^G_n_P_n-k,k from R[G_n]-modules to R[G_n-k× G_k]-modules with the top derivative from R[G_k]-modules to R-modules. To emphasize the dependence on ψ_R let us write π^(k,ψ_R) = π^(k). In this notation, we have Let π be an R[G_n]-module, and let 1≤ k≤ n. We have (π^(k,ψ_R))^∨≅ (π^∨)^(k,ψ_R^-1). When k=n this follows from <ref> and <ref>. When k<n, <ref> combined with <ref> shows that it suffices to prove the parabolic restriction functor commutes with duals. However since parabolic restriction is both left and right adjoint to parabolic induction, and parabolic induction commutes with duals (<cit.>), it follows that parabolic restriction commutes with duals. The following is a characterization of cuspidal restrictions in terms of Bernstein–Zelevinsky derivatives. Let k be a ℤ[1/p,ζ_p]-algebra which is a field. An irreducible k-representation V of G_n is cuspidal if and only if V^(n) is one-dimensional and V^(i)=0 for i=1,…,n-1. Finally, we state some basic facts about how the spaces of bilinear forms interact with some of the Bernstein–Zelevinsky functors . _P_n+1(Ψ^+(V),Ψ^+(V'), χ) ≅_G_n(V,V',χ) _P_n+1(Φ^+(V),Φ^+(V'),χ) ≅_P_n(V,V',χ) _P_n+1(Ψ^+(V),Φ^+(V'),χ) =0 In each statement above, V and V' are arbitrary representations living in the appropriate category. This follows from <cit.> and the adjunctions in <ref>. Let V ∈_R(P_n) and V' ∈_R(G_n). Then _G_n(Φ^+V,V',1)≅_P_n(V,V',1). The proof is the same as in <cit.> or <cit.>. §.§ Whittaker models Recall that we fixed a nontrivial character ψ: _q →[1/p,ζ_p]^× and its extension ψ_R: _q → R^× in <ref>. The Whittaker space for G_n, or Gelfand–Graev representation of G_n, is (ψ_R) := _N_n^G_nψ_R where ψ_R is viewed as a character of N_n via the map N_n → N_n/[N_n,N_n] (_q)^⊕ n-1→_q (y_1, …, y_n-1) ↦ y_1 + ⋯ +y_n-1. Since we defined ψ over the base ring, (ψ_R) does not depend on the choice of ψ. See <cit.> for a discussion of this. We say that (ρ,V) ∈_R(G_n) is of ψ-Whittaker type (or just Whittaker type) if the n-th derivative V^(n) is a free R-module of rank 1. 
We will sometimes call an irreducible representation of ψ-Whittaker type ψ-generic or generic. Without the irreducibility assumption, there is a distinction between Whittaker type and generic, as described in the next definition. By Frobenius reciprocity and <ref> there is an isomorphism _R(V^(n), R) _R[G_n](V, (ψ_R)). Suppose (ρ,V) is of ψ-Whittaker type. Then the choice of a generator of _R(V^(n),R) gives a map V →(ψ_R). * The image of V →(ψ_R) is denoted (V, ψ_R) and is called the ψ-Whittaker model (or just Whittaker model) of V. Note the image does not depend on the choice of generator. * We say that V is essentially ψ-generic if the map V →(ψ_R) is injective. In this case V and (V, ψ_R) are isomorphic as R[G_n]-modules. Let R→ R' be a homomorphism of rings. If (ρ,V) is of ψ-Whittaker type, so is (ρ⊗_RR', V⊗_RR') and 𝒲(V⊗_RR',ψ_R') = 𝒲(V,ψ_R)⊗_RR'. Since |N| is invertible in R and R', it follows from the existence of the projector in <ref> that (V⊗_RR')^N,ψ_R' = (V^N,ψ_R)⊗_RR' and hence also (V⊗_RR')_N,ψ_R' = (V_N,ψ_R)⊗_RR'. This proves that V⊗_RR' is also of Whittaker type. Next, if λ is a generator of the rank-one R-module (V_N,ψ_R)^∨, the Whittaker model of V is V →𝒲(V,ψ_R) v ↦ W_v where W_v(g) = λ(gv). In particular, λ⊗ 1 is generator of ((V⊗_RR')_N,ψ_R')^∨ and the Whittaker model of V⊗_RR' is given by W_v⊗ 1(g) = (λ⊗ 1)(gv) = λ(gv)⊗ 1 = W_v(g)⊗ 1. In particular, 𝒲(V⊗_RR',ψ_R')=𝒲(V,ψ_R)⊗_RR'. The following Lemma is sometimes described as the existence of so-called “Bessel vectors.” If (ρ,V)∈_R(G_n) is of ψ-Whittaker type, the map (V,ψ_R) →_N_n^P_nψ_R W ↦ W|_P_n is surjective. Denote (W,ψ_R) by . We will exhibit a subspace of that maps isomorphically to _N_n^P_nψ_R under this map, namely it is the bottom step (Φ^+)^n-1Ψ^+(^(n)) of the filtration in <ref> applied to . By <ref> and <ref>, the natural quotient map →^(n) v ↦v̅ maps ^N_n,ψ_R isomorphically onto ^(n). We view ^(n) as the trivial representation of G_0 = {1}, the definition of Φ^+ and transitivity of induction identifies _N_n^P_nψ_R ≅ (Φ^+)^n-1Ψ^+(^(n)). The inclusion (Φ^+)^n-1Ψ^+(^(n)) ↪ coming from <ref> corresponds to the aforementioned isomorphism ^(n)≅^N_n,ψ_R under the following adjunctions: _R(^(n),^N_n,ψ_R) ≅_R[N_n](ψ_^(n),) ≅_R[P_n](_N_n^P_nψ_^(n),). Let us be explicit. If v is an R-generator of ^N_n,ψ_R, the function f_v̅ supported on N_n such that f_v̅(n) = ψ_R(n)v̅, n∈ N_n, is a generator of _N_n^P_nψ_^(n). The inclusion _N_n^P_nψ_^(n)↪ sends f_v̅ to v. As is a subset of _N_n^G_nψ_R, we will view elements of as functions on G_n. In this context, the value w(g) of an element w∈ is the element of R corresponding to gv in our fixed isomorphism ^(n)≅ R. Since our generator v of ^N_n,ψ_R satisfies nv = ψ_R(n)v for n∈ N_n, it follows that for g∈ G_n-1, ([ I_n-1 u; 0 1 ])∈ U_n, we have ψ_R([ I_n-1 u; 0 1 ])v([ g 0; 0 1 ]) = ψ_R([ I_n-1 g^-1u; 0 1 ])v([ g 0; 0 1 ]). Since P_n-1 is the stabilizer of ψ_R in G_n-1, it follows that the support of v|_G_n-1 is contained in P_n-1. But the same argument with g∈ G_n-2 and u∈ U_n-1 shows that v|_G_n-2 is supported on P_n-2. Repeating this, we conclude that the restriction of v to P_n is supported only on N_n. Since the values of v and f_v̅ agree on N_n by construction, we conclude that v|_P_n = f_v̅. Since f_v̅ is a generator of _N_n^P_nψ_R, we conclude. Let W: G_n → R be an element of (V,ψ_R) and let W be the function defined by W(g) = W(w_n (^ι g)), where w_n is defined to be the antidiagonal matrix in G_n with 1's along the antidiagonal, and ^ιg := ^tg^-1. 
Then W(ng) = W(w_n(^ιn) (^ιg)) = ψ_R^-1(n)W(w_n(^ιg)) = ψ_R^-1(n)W(g) for all n ∈ N_n and so defines an element of (^ιV,ψ_R^-1), where ^ιV denotes the representation given by precomposing V with the involution ^ι. §.§ Exceptional representations Later when defining gamma factors for pairs of representations we will need to exclude certain exceptional pairs. The term “exceptional” follows <cit.>, which studies representations of _2(_q) on -vector spaces and defines the notion of exceptional for characters. Our definition is a higher dimensional generalization of op. cit. If (π,V) ∈_R(G_n) and (π',V') ∈_R(G_m) we say that (V,V') is an exceptional pair, or that V' is exceptional for V (or vice versa) if there exists an integer t ∈{1,…,min(m,n) } such that _G_t((V,ψ_R)^(n-t), (V',ψ_R^-1)^(m-t), 1)≠{0}. We remark that the notion of exceptional pair only depends on the Whittaker models of the representations. § FUNCTIONAL EQUATION Fix (π,V') ∈_R(G_n) and (π',V') ∈_R(G_m) both of Whittaker type. Assume that π' is not exceptional for π. In this section we construct a gamma factor γ(π×π', ψ_R) for the pair (π,π'). Since this will only depend on the Whittaker models, we make the following abbreviations to ease the notation in this section: := (V,ψ_R) and ' := (V',ψ_R^-1). §.§ Gamma factor and functional equation when n>m We first suppose n > m; the n = m case is slightly different, so we address it afterwards. Recall the subgroup U_n, n-m-1:={([ I_m+1 z; y ]) : z∈Mat_m+1, n-m-1(𝔽_q), y∈ N_n-m-1}. Inflate ' to an R[G_mU_n,n-m-1]-module by letting U_n,n-m-1 act trivially. Consider the following finite field analogue of the integral defined in <cit.>. If W: G_n → R and W': G_m → R are two functions and j ∈0,…,n-m-1 then let I(W,W';j) := ∑_g∈ N_m\ G_m∑_y∈_j× mW[ g 0 0; y I_j 0; 0 0 I_n-m-j ]W'(g). If we let w_n,m:=[ I_m ; w_n-m ] then a direct computation shows that the maps (W,W') ↦ I(W,W';0) = ∑_g∈ N_m\ G_mW([ g 0; 0 I_n-m ])W'(g) (W,W') ↦ I(w_n,mW,W';n-m-1) = ∑_g∈ N_m\ G_m∑_y∈_n-m-1 × mW([ 0 1 0; 0 0 I_n-m-1; g 0 y ])W'(g) define elements of _G_mU_n,n-m-1(, ',1⊗ψ_R), where 1⊗ψ_R is the character acting trivially on G_m and by ψ_R on U_n,n-m-1. In this section we use the calculus of the Bernstein–Zelevinsky functors to analyze this space of bilinear forms. Our main result is the following. The space _G_mU_n,n-m-1(, ',1⊗ψ_R) is free of rank one over R generated by I(W,W';0). As a corollary, we deduce the functional equation which defines the gamma factor γ(π×π', ψ_R). There exists a unique element γ(π×π', ψ_R) ∈ R such that I(W,W';0)γ(π×π', ψ_R) = I(w_n,mW,W';n-m-1) for all W ∈ and W' ∈'. In the next section we prove a more general functional equation and use it to deduce that in fact γ(π×π', ψ_R) ∈ R^×, see <ref>. If f:R→ R' be a ring homomorphism, then f(γ(π×π',ψ_R)) = γ(π⊗_RR'×π'⊗_RR',ψ_R'). By applying f to both sides of the functional equation in <ref> and using <ref>, we find that f(γ(π⊗π',ψ_R)) satisfies the same functional equation as γ(π⊗_RR'×π⊗_RR',ψ_R'). Therefore the uniqueness in <ref> implies they are equal. Note that if V is cuspidal, there are no representations that are exceptional for V. Thus, in this case, we recover the functional equation in the special cases treated in <cit.>. The rest of this section is devoted to the proof of <ref>. Our strategy follows that of <cit.> and <cit.> in the setting of p-adic groups but there is a key lemma in the p-adic setting which completely fails in the setting of finite groups for lack of unramified characters, namely <cit.>. 
This failure is precisely what necessitates the exclusion of the exceptional representations for V in <ref>. Without the exclusion of exceptional characters the theorem is false, c.f. <cit.>. Our main tool will be the properties of the Bernstein–Zelevinsky functors established in <ref> and <ref>. The proof of <ref> proceeds by several reductions steps, which we state as lemmas. There is a canonical isomorphism _G_mU_n,n-m-1(,',1⊗ψ_R) _G_m((Φ^-)^n-m-1, ', 1). By definition, _G_mU_n,n-m-1(,',1⊗ψ_R) is _R[G_mU_n,n-m-1](⊗',1⊗ψ_R), where G_mU_n,n-m-1 acts diagonally on ⊗'. But any such homomorphism must factor through τ⊗' where τ is the quotient of by the submodule generated by elements of the form uW - ψ_R(u)W for u∈ U_n,n-m-1, W∈. Moreover, this quotient is universal for this property, so <ref> is isomorphic to _R[G_m](τ⊗',1). Now the result follows from the fact that τ = (Φ^-)^n-m-1, see <ref>. We now consider the Bernstein–Zelevinsky filtration of given by <ref>. After applying (Φ^-)^n-m-1 to the filtration we have 0⊂ (Φ^-)^n-m-1_n⊂⋯⊂ (Φ^-)^n-m-1_1 = (Φ^-)^n-m-1, which is now a filtration of representations of P_m+1. Following <ref> and exactness of Φ^-, the successive quotients are given by (Φ^-)^n-m-1(_k/_k+1) = (Φ^-)^n-m-1(Φ^+)^k-1Ψ^+(^(k)). Note that since ^(n) = V^(n) = 1 by assumption, the identity Φ^-Φ^+≃𝕀 implies that the bottom step of the filtration is the submodule (Φ^-)^n-m-1(Φ^+)^n-1Ψ^+(1) = (Φ^+)^mΨ^+(1)⊂ (Φ^-)^n-m-1. The restriction map _G_m((Φ^-)^n-m-1,',1) →_G_m((Φ^+)^mΨ^+(1), ',1) B ↦B(Φ^+)^mΨ^+(1)×' is injective. If B|_(Φ^+)^mΨ^+(1)×' = 0, it defines a bilinear form on the next quotient (Φ^-)^n-m-1(Φ^+)^n-2Ψ^+(^(n-1))×'. In fact, we will show that the spaces of bilinear forms on each successive quotient, _G_m((Φ^-)^n-m-1(Φ^+)^iΨ^+(^(i+1)), ',1), are identically zero for i=0,…, n-2. We will consider three cases. Case 1: i <n-m-1. The module (Φ^-)^n-m-1(Φ^+)^iΨ^+(^(i+1)) is zero since Φ^-Ψ^+≃ 0, see <ref>. Case 2: i= n-m-1. We have _G_m((Φ^-)^n-m-1(Φ^+)^iΨ^+(^(i+1)), ',1) = _G_m(Ψ^+(^(n-m)), ',1) = _G_m(^(n-m), ', 1) = {0}, where the last equality is from the non-exceptional assumption. Case 3: i>n-m-1. In this case, we are considering the space _G_m((Φ^+)^i-(n-m-1)Ψ^+(^(i+1)), ',1). To keep things tidy, we introduce a new index: t := n-i-1, so that i-(n-m-1) = m - t i+1 = n-t. Because n-m≤ i≤ n-2 in the present case, the range of t is 1≤ t ≤ m-1. Our goal is to prove _G_m((Φ^+)^m-tΨ^+(^(n-t)), ',1)={0}. First, we can restrict to P_m following <ref>, _G_m((Φ^+)^m-tΨ^+(^(n-t)), ',1)=_P_m((Φ^+)^m-t-1Ψ^+(^(n-t)),',1). As a representation of P_m, we filter ' using <ref>: the successive quotients in the filtration are (Φ^+)^m-t'-1Ψ^+((')^(m-t')) with 0≤ t'≤ m. At the bottom of the filtration, where t'=0, our bilinear forms restrict to elements of _P_m((Φ^+)^m-t-1Ψ^+(^(n-t)),(Φ^+)^m-1Ψ^+((')^(m)),1), which equals zero by <ref> and <ref> since t>0. Similarly, when a bilinear form is restricted to any step in the filtration where t≠ t', the same argument gives _P_m((Φ^+)^m-t-1Ψ^+(^(n-t)),(Φ^+)^m-t'-1Ψ^+((')^(m-t')),1)={0}. Thus it remains only to treat the case where t = t', where _P_m((Φ^+)^m-t-1Ψ^+(^(n-t)),(Φ^+)^m-t-1Ψ^+((')^(m-t)),1)={0}, by the assumption that V' is non-exceptional for V. _P_m((Φ^+)^m-1Ψ^+(1), ', 1) ↪_P_m((Φ^+)^m-1Ψ^+(1),(Φ^+)^m-1Ψ^+(1),1) = _G_0(1,1,1) = R First, we note that the second isomorphism is given by properties <ref> and <ref> of the Bernstein–Zelevinsky functors, and the third isomorphism is trivial. 
Next, we will consider the isomorphism on the first line. Consider the filtration of ' as in <ref>. From <ref>, the bottom step of the filtration is (Φ^+)^m-1Ψ^+(1). The first isomorphism in the lemma is given by restricting a bilinear form B to this bottom step in the second factor. We will prove that this restriction map is injective. Assume a bilinear form is zero when restricted to (Φ^+)^m-1Ψ^+(1)× (Φ^+)^m-1Ψ^+(1). Then it defines a bilinear form in _P_m((Φ^+)^m-1Ψ^+(1),(Φ^+)^i-1Ψ^+((')^(i)),1) for an integer i<m. But this space is zero by the same argument as in the proof of <ref>, thanks to properties <ref> and <ref> of the Bernstein–Zelevinsky functors. Hence B=0, and the injectivity is proved. Finally, we use the following fact to put everything together. Suppose A is a commutative ring, M is a finitely generated A-module and N ⊂ M is an A-submodule. Then any surjection f: N ↠ M is an isomorphism. The above three lemmas give us an injection _G_m U_n,n-m-1(, ', 1⊗ψ_R) ↪ R By <ref> it suffices to find W ∈ and W' ∈' such that I(W,W';0) = 1, because then the evaluation map _W,W': _G_mU_n,n-m-1(,',1⊗ψ_R) ↠ R is surjective and sends I(W,W';0) to a unit. But since the map 𝒲' → R W' ↦ W'(1) factors as two surjective maps '→'_N_m,ψ_R^-1 R, there always exists W'∈' such that W'(1) = 1. Given an arbitrary element ϕ of _N_n^P_nψ_R, <ref> tells us there exists W in such that W|_P_n=ϕ. Note that when we evaluate the sum defining I(W,W';0) we only ever evaluate W on elements of P_n, so choosing ϕ so it is supported only on N_n and such that ϕ(1)=1, we find I(W,W';0)=1. Note that if R is a field, this final surjectivity argument is unnecessary because any nonzero bilinear form (e.g. I) will provide a basis vector. §.§ More general functional equation when n>m In this subsection we use <ref> to deduce a slightly more general functional equation for the gamma factor. First we introduce some notation. Assume the same notation from the previous section. Let j be an integer, 0≤ j≤ n-m-1. In the same setup as <ref>, we have I(W,W';j)γ(π×π',ψ_R) = I(w_n,mW,W';k), where k = n-m-1-j. The same argument as in <cit.> works here. In the same setup as <ref>, the element γ(π×π',ψ_R) is invertible in R. One approach would be to prove that I(w_n,mW',W';n-m-1) is also a generator of _G_mU_n,n-m-1(, ',1⊗ψ_R), but we will instead use <ref>. Since w_n,mW defines an element of 𝒲(^ι V,ψ_R^-1), the functional equation gives I(W,W';0)γ(π×π',ψ_R)γ(^ιπ×^ιπ',ψ_R^-1) = I(w_n,mW,W';n-m-1)γ(^ιπ×^ιπ',ψ_R^-1) = I(w_n,mw_n,mW,W';0) = I(W, W';0) . Thus it's enough to show the existence of W and W' such that I(W,W';0) = 1, which is done in the proof of <ref>. §.§ Gamma factor and functional equation when n=m Now we address the case when n = m. Let C(_q^n,R) denote the set of all functions Φ: _q^n → R. Since G_n naturally acts (on the right) on _q^n, the set C(_q^n,R) acquires an R-linear left G_n-action by setting (g · f)(x) = f(x · g). The R-subspace C_0(_q^n,R) = f ∈ C(_q^n,R) : f(0,…,0) = 0 is G_n-stable. In order to formulate a functional equation, we define trilinear forms instead of bilinear forms to take into account the functions in C(_q^n,R). If W, W': G_n → R are two functions and Φ∈ C(_q^n, R) then let I(W,W',Φ) := ∑_g ∈ N_n \ G_n W(g) W'(g) Φ(η g) where η = [ 0 ⋯ 0 1 ]. For Φ∈ C(_q^n, R) let Φ∈ C(_q^n,R) denote the Fourier transform Φ(a) = ∑_x ∈_q^nΦ(x)ψ_R(a · x). The maps (W,W',Φ) ↦ I(W,W',Φ) (W,W',Φ) ↦ I(W,W',Φ) define elements of _G_n(, ', C(_q^n,R)). 
If (V,V') is not an exceptional pair then _G_n(, ',C(_q^n,R)) is a free R-module of rank 1 generated by I(W,W',Φ). We closely follow <cit.>. The G_n-equivariant exact sequence of R-modules 0 → C_0(_q^n,R) → C(_q^n,R) →1→ 0 consists entirely of free finite rank R-modules and thus splits, so 0 →⊗_R ' ⊗_R C_0(_q^n,R) →⊗_R ' ⊗_R C(_q^n,R) →⊗_R ' → 0 is still a G_n-equivariant exact sequence. Since (V,V') is not an exceptional pair we see that _G_n(, ', 1) = 0. So in view of the above sequence and the left-exactness of the Hom functor, we see that _G_n(, ', C(_q^n,R)) injects into _G_n(, ', C_0(_q^n, R)). Note that C_0(_q^n,R) is isomorphic as a G_n-representation to _P_n^G_n1 because the orbit of the vector η = (0,…,0,1) under the standard right action of G_n on _q^n is _q^n - (0,…,0) and the stabilizer is P_n. Therefore, _G_n(, ', C_0(_q^n,R)) = _G_n(⊗_R ' ⊗_R _P_n^G_n1, 1) = _R[G_n](⊗_R ', (_P_n^G_n1)^∨) = _R[G_n](⊗_R ', _P_n^G_n1) = _P_n(, ', 1). Recall from above that admits a filtration of length n by P_n-subrepresentations with successive quotients isomorphic to (Φ^+)^k-1Ψ^+(^(k)) for k = 1,…,n, and the same is true for '. But in view of <ref> _P_n((Φ^+)^k-1Ψ^+(^(k)), (Φ^+)^j-1Ψ^+((')^(j)), 1) is zero unless k = j, in which case it's equal to _P_n((Φ^+)^k-1Ψ^+(^(k)), (Φ^+)^k-1Ψ^+((')^(k)), 1) = _G_n-k(^(k), (')^(k), 1). But (V,V') is not an exceptional pair, so this vanishes for k = 1, …, n-1. The only surviving piece, then, is when k = j = n and so using <ref> we see that there is an injection _P_n(, ', 1) ↪_P_n((Φ^+)^n-1Ψ^+(1), (Φ^+)^n-1Ψ^+(1),1) = _R(1,1,1) = R We have therefore found an R-module injection _G_n(, ', C(_q^n, R)) ↪ R By <ref> it suffices to find W ∈ and W' ∈' such that I(W,W',δ_η) = 1, because then the evaluation map _W,W',δ_η: _G_n(,',C(_q^n,R)) ↠ R is surjective and sends I(W,W',Φ) to a unit. As in the proof of <ref> we can pick Whittaker functions W ∈ and W' ∈' such that W(1) = 1, the restriction W|_P_n is supported on N_n, and W'(1) = 1. Then I(W,W',δ_η) = 1. There exists a unique element γ(π×π', ψ_R) of R^× such that I(W,W',Φ)γ(π×π', ψ_R) = I(W,W',Φ) for all W ∈ and W' ∈'. <ref> shows that there exists such a γ(π×π',ψ_R) ∈ R, so we need to show that it's a unit. As in <ref>, we have I(W,W',Φ)γ(π×π',ψ_R)γ(^ιπ×^ιπ', ψ_R^-1) = I(W,W',Φ)γ(^ιπ×^ιπ', ψ_R^-1) = I(W,W',Φ) = I(W,W',Φ) and the proof of <ref> gives us W,W',Φ such that I(W,W',Φ) = 1. § CONVERSE THEOREM Let k = 𝔽_ℓ. In this section we prove a converse theorem for cuspidal k-representions, in which gamma factors take values in Artinian k-algebras. §.§ Projective envelopes Recall that N_n denotes the subgroup of unipotent upper triangular matrices. Since the order of N_n is relatively prime to ℓ, the character ψ_k: N_n → k^× is a projective k[N_n]-module. Since _N_n^G_n is left-adjoint to an exact functor, it takes projective objects to projective objects and therefore _N_n^G_nψ_k is a projective k[G_n]-module. We can then decompose _N_n^G_nψ_k as a direct sum _N_n^G_nψ_k = P_1^⊕ e_1⊕…⊕ P_r^⊕ e_r, where each P_i is indecomposable and projective, and P_i≇P_j for i≠ j. However, we know, see <cit.>, that _G_n(_N_n^G_nψ_k) is a commutative ring, so e_i=1 for all i. The commutativity of _G_n(_N_n^G_nψ_k) also implies that _G_n(P_i,P_j) = 0. 
There is a bijection <cit.> between isomorphism classes of irreducible representations of G_n and isomorphism classes of indecomposable projective k[G_n]-modules: {irreducible k[G_n]-modules} ↔{indecomposable projective k[G_n]-modules} π ↦ P(π) (P) P, where P(π) denotes the projective envelope of π and (P) denotes the socle (i.e. the largest semisimple subrepresentation) of P. Note also that, by duality, π also occurs as a quotient of P(π) and is in fact the only irreducible quotient of P(π) <cit.>. In other words, π is not only the socle of P(π) but also its cosocle (i.e. the largest semisimple quotient). Since P_i is not isomorphic to P_j for i≠ j, the bijectivity above implies that (P_i) is not isomorphic to (P_j). On the other hand, being contained in _N_n^G_nψ_k, each (P_i) is irreducible and generic, and every irreducible generic representation must occur as a (the) submodule of some P_i. Thus in restricting to generic objects we have a bijection of isomorphism classes: {irreducible generick[G_n]-modules} ↔{P_1,P_2,…,P_r} π ↦ P(π) (P) P, Let P := P(π) for an irreducible generic representation π. Since P is indecomposable projective, R(π):=_k[G_n](P) is a local ring. R(π) is commutative because it is contained in _k[G_n](_N_n^G_nψ). Note that since P is finite-dimensional over k, the ring R(π) is a finite-dimensional k-algebra. §.§ Duality and derivative of P(pi) Recall that we let ^ι g := ^t g^-1 and that for any representation (π,V) of G_n, we let (^ιπ, V) denote the representation ^ιπ(g)v := π(^ι g)v. If π: G_n →Aut(V) is an irreducible representation, one has ^ιπ≅π^∨ where π^∨ denotes the dual to π. Following <cit.>, it suffices to show that π^∨ and ^ιπ have the same Brauer character. Let g ∈ G_n have order coprime to ℓ. Then π^∨(g) = ^tπ(g^-1) = π(g^-1) = π(^ιg) = ^ιπ(g) since every matrix in G_n is conjugate to its transpose. Now let us consider the representation ^ι(_N_n^G_nψ_k). Denote by w the antidiagonal matrix with 1's along the antidiagonal, and note that for u ∈ N_n, ψ_k(w(^ι u)w^-1) = ψ_k^-1(u). Let W:G→ k be an element of _N_n^G_nψ_k, and let W be the function defined by W(g) = W(w (^ι g)). This function defines an element of _N_n^G_nψ_k^-1 since for u ∈ N_n we have W(ug) = W(w^ι u^ι g) = ψ_k^-1(u)W(w^ι g) = ψ_k^-1(u)W(g). For h∈ G, the map _N_n^G_nψ_k →_N_n^G_nψ_k^-1 W ↦W satisfies (hW) = ^ι hW, so the map is a G-equivariant isomorphism when the target is equipped with the G-action obtained by composing the right-translation action with the involution g↦^ι g. Recall the notation from <ref>: for R a k-algebra and P an R[G_n]-module: P^(n) = P_N_n,ψ_R := P/P(N_n,ψ_R), where P(N_n,ψ_R) is the R-module generated by uv-ψ_R(u)v, u∈ N_n, v∈ P. Thus P^(n) is the (N_n,ψ_R)-coinvariants, i.e. the largest quotient in the category of R[N_n]-modules on which N_n acts via the character ψ_R. Note that P(N_n,ψ_k) is equal to the k-vector space generated by the set {uv-ψ_k(u)v:u∈ N_n, v∈ P}, so P^(n) is also the largest quotient on which N_n acts via ψ_k in the category of k[N_n]-modules. Let π be an irreducible generic representation and let P = P(π) be its corresponding indecomposable projective module, considered as a module over the ring R =R(π)= _k[G](P). Then P^(n) = P^(n,ψ_k) = P^(n,ψ_R) is free of rank one as an R-module. By <ref>, P^(n) is canonically isomorphic to the k-subspace P^N_n,ψ_k consisting of elements on which N_n acts via ψ_k. 
Thus we get the following string of k-isomorphisms: P^(n) ≅_k[N_n](ψ_k,P) ≅_k[G_n](_N_n^Gψ_k,P) ≅_k[G_n](P) = R(π) The first isomorphism is the defining property of (N_n,ψ_k)-invariants, and takes an element v∈ P^N_n,ψ_k to the map ψ_k → P defined by sending 1 to v. The second isomorphism is Frobenius reciprocity (<ref>). The third isomorphism follows from multiplicity-freeness of _N_n^G_nψ_k. The ring R(π) acts on P by definition, and this action preserves P^(n). It acts on each of the above spaces by composition, and each of the above isomorphisms is R-linear. §.§ Whittaker model of P(pi) We first note that P=P(π) is of ψ-Whittaker type: from <ref>, P^(n) is an R(π) module of rank 1, so that _R(π)(P^(n), R(π)) = R(π). This allows us to consider the Whittaker model of P(π) in _N_n^G_nψ_R(π), which entails choosing an element η∈_R(π)(P^(n), R(π)) corresponding to a unit in R(π). If we identify P(π)^(n)≅ R(π) under the isomorphism from <ref>, and thus identify _R(π)(P^(n), R(π))≅_R(π)(R(π),R(π)) = R(π), we might as well choose η corresponding to correspond to the identity under this identification. The Whittaker model 𝒲(P(π),ψ_R(π)) is then, by definition, the image of the map P(π) →_N_n^G_nψ_R(π) f ↦ W_f defined as follows. If λ: P(π) → P(π)^(n)≅ R(π) denotes the natural quotient map, Frobenius reciprocity gives the formula W_f(g):= λ(g f), g∈ G. Next, we will compute a natural section of the above map from P(π) to its Whittaker model. There is a canonical map of k[N_n]-modules _N_n^G_nψ_k→ψ_k given by evaluation at the identity. For each irreducible generic representation π, we can restrict this to a map P(π)→ψ, which must factor through the (N_n,ψ)-coinvariants to give a map θ:R(π)≅ P(π)^(n)→ k of k-vector spaces. In other words, for f an element of _N_n^G_nψ_k that lives in P(π), we have θ(λ(f)) = f(1). Let W_f be the R(π)-valued Whittaker function of f in 𝒲(P(π),ψ_R(π)^-1). We have (θ∘ W_f)(g) = θ(λ(g f)) = (g f)(1) = f(g), g∈ G_n Thus a section of our chosen map P(π)→_N_n^G_nψ_R(π) is given by composing with θ. We record these observations in the following corollary. The representation P = P(π) is of ψ-Whittaker type and essentially ψ-generic, i.e., embeds in its Whittaker model. §.§.§ Alternative view of the Whittaker model of P(pi) In this subsection we attempt to illustrate why 𝒲(P(π),ψ_R(π)) is more useful that P(π) itself: it sees the natural action of R(π) on P(π) in both its G_n-structure and its R(π)-structure coming from multiplying R(π)-valued functions by elements of R(π). We will not use these results in the rest of the paper, but include them to give a more conceptual understanding of the map θ from the previous subsection. By extending scalars along the natural inclusion k⊂ R(π) we get an embedding _N_n^G_nψ_k ↪_N_n^G_nψ_R(π), which restricts to an inclusion on the summand P = P(π) P↪ P⊗_kR(π). The module P⊗_k R(π) has two distinct R(π)-module structures, both of which commute with the G_n-action, namely the one defined on simple tensors by ϕ*(f⊗ϕ') = ϕ(f)⊗ϕ', and the one defined by ϕ·(f⊗ϕ') = f⊗ (ϕ·ϕ'). There is a natural projection of k[G_n]-modules ϖ: P⊗_k R(π)→ P⊗_R(π)R(π)≅ P given by taking the quotient by the k-subspace (ϖ) generated by tensors of the form f⊗ϕ -ϕ(f)⊗ 1. Since P is a projective k[G_n]-module, ϖ is a split surjection, and there exists a section η:P→ P⊗_k R(π) giving a decomposition into a direct sum of k[G_n]-modules P⊗_kR(π) = η(P)⊕(ϖ). 
However, by commutativity of R(π) we have ϕ*(f⊗ϕ' - ϕ'(f)⊗ 1) = ϕ(f)⊗ϕ' - ϕ'(ϕ(f))⊗ 1 ϕ· (f⊗ϕ'-ϕ'(f)⊗ 1) = (f⊗ϕϕ' - ϕϕ'(f)⊗ 1) - (ϕ'(f)⊗ϕ - ϕ(ϕ'(f))⊗ 1), so (η) is stable under R(π) for both actions, and thus so is η(P). We conclude the above splitting is in fact a splitting of R(π)[G_n]-modules for both R(π) actions. Furthermore, given f∈ P, we must have that ϕ*η(f) - ϕ·η(f) is an element of (ϖ)∩η(P) = {0}, which shows ϕ*η(f) = ϕ·η(f). We conclude that each splitting η gives rise to a Whittaker model P ↪_N_n^G_nψ_R(π), for which ϖ is a canonical section, and whose image lands in the subset {W∈ P⊗_kR(π) : ϕ* W = ϕ· W , ϕ∈ R(π)}. From this perspective, the relation θ∘ W_f = f of the previous subsection amounts to the fact that the composite map P ↪ P⊗_k R(π) P is equivalent to the identity. §.§ Definition of the new gamma factors Given irreducible generic k-representations ρ of G_n and π of G_m, we will define a modified gamma factor γ̃(ρ×π,ψ) as follows. Let ρ_R(π) denote the extension of scalars ρ⊗_kR(π) along the structure morphism k↪ R(π). Now since (ρ_R(π))^(n) = ρ^(n)⊗_k R(π) ≅ k⊗_k R(π)≅ R(π), and P(π)^(m) is free of rank one over R(π) by <ref>, we may apply <ref> to the R(π)[G_n]-module ρ_R(π) and the R(π)[G_m]-module P(π): γ̃(ρ×π,ψ) := γ(ρ_R(π)× P(π),ψ)∈ R(π)^×. §.§ Completeness of Whittaker models To prove the converse theorem we need a so-called “L^2-completeness of Whittaker models” statement. The point of passing to R(π) coefficients instead of k coefficients is to recover such a completeness statement. <ref> discusses counterexamples to the converse theorem for k-valued gamma factors: they arise because of the failure of completeness of Whittaker models (which for G_1 reduces to the dual to linear independence of characters.) Fix an irreducible generic k-representation π of G_n. Let H be an element of _N_n^G_nψ_k. If ∑_x∈N_n\ G_nH(x)W(x)=0 for every W∈𝒲(P(π),ψ^-1_R(π)), for every irreducible generic representation π of G_n, then H is identically zero. By replacing ψ with ψ^-1 in <ref> we can make a choice of isomorphism P(π)^(n,ψ^-1)≅ R(π) for each irreducible ψ^-1-generic representation π to get a Whittaker model P(π) →_N_n^G_nψ^-1_R(π) f ↦ W_f. Recall from <ref> there is a map θ:R(π)≅ P^(n,ψ^-1)→ k arising from f↦ f(1) such that f = θ∘ W_f. Thus for every such f∈_N_n^G_nψ_k^-1, we have 0 = θ(∑_x∈ U\ GH(x)W_f(x)) = ∑_x∈ U\ GH(x)θ(W_f(x)) =∑_x∈ U\ GH(x)f(x). We established in <ref> that _N_n^G_nψ is the multiplicity-free direct sum of the P(π) for π irreducible generic, and as such is spanned by f ∈ P(π). Since _U^Gψ_k×_U^Gψ_k^-1 → k (H,f) →∑_x∈ U\ GH(x)f(x) is a nondegenerate duality pairing, we conclude that H is identically zero. §.§ Proof of converse theorem We finally arrive at the proof of <ref>. Our strategy is inspired by the proof of the converse theorem in <cit.>. If ρ_1 and ρ_2 are cuspidal k-representations of G_n, set S(ρ_1,ρ_2,ψ) := {(W_1,W_2)∈𝒲(ρ_1,ψ_k)×𝒲(ρ_2,ψ_k): W_1|_P_n = W_2|_P_n}. There is a diagonal action of P_n on 𝒲(ρ_1,ψ_k)×𝒲(ρ_2,ψ_k) and the subspace S(ρ_1,ρ_2,ψ) is stable under this action by its definition. We will show it is in fact G_n-stable if we suppose that ρ_1 and ρ_2 have the same gamma factors. Let ρ_1 and ρ_2 be cuspidal k-representations of G_n and suppose that γ̃(ρ_1×π,ψ) = γ̃(ρ_2×π,ψ) for all irreducible generic representations π of G_n-1. Then S(ρ_1,ρ_2,ψ) is stable under the diagonal action of G_n. The restriction of a Whittaker function to P_n is determined by its values on G_n-1 (embedded in G_n in the top left). 
Therefore: (W_1,W_2)∈ S(ρ_1,ρ_2,ψ) ⇔ W_1([ x 0; 0 1 ]) = W_2([ x 0; 0 1 ]) for all x ∈ G_n-1 <ref>∑_x∈ U_n-1\ G_n-1W_1([ x 0; 0 1 ])W'(x) = ∑_x∈ U_n-1\ G_n-1W_2([ x 0; 0 1 ])W'(x) for all W'∈𝒲(P(π),ψ_R(π)^-1), for all π equality of γ̃'s∑_x∈ U_n-1\ G_n-1W_1([ 0 1; x 0 ])W'(x) = ∑_x∈ U_n-1\ G_n-1W_2([ 0 1; x 0 ])W'(x) for all W'∈𝒲(P(π),ψ_R(π)^-1), for all π <ref> W_1([ 0 1; ^ι x 0 ]) = W_2([ 0 1; ^ι x 0 ]) for all x ∈ G_n-1 ⇔W_1([ x 0; 0 1 ]) = W_2([ x 0; 0 1 ]) for all x ∈ G_n-1 ⇔ (W_1,W_2)∈ S(ρ_1^∨,ρ_2^∨,ψ^-1) Now if p∈^tP_n we have, for i=1,2, pW_i(g) = (^ιpW_i)(g). Thus if (W_1,W_2)∈ S(ρ_1,ρ_2,ψ), then since S(ρ_1^∨,ρ_2^∨,ψ^-1) is P_n-stable we have (pW_1,pW_2) = (^ιpW_1,^ιpW_2)∈ S(ρ_1^∨,ρ_2^∨,ψ^-1). The above equivalences then imply that (pW_1,pW_2) is in S(ρ_1,ρ_2,ψ). Thus we have shown that S(ρ_1,ρ_2,ψ) is stable under both P_n and ^tP_n. Since these two groups generate G_n we conclude that S(ρ_1,ρ_2,ψ) is stable under G_n. Suppose ρ_1 and ρ_2 are irreducible cuspidal representations of G over k and suppose that γ̃(ρ_1×π,ψ) = γ̃(ρ_2×π,ψ) for every irreducible generic representation π of G_n-1. Let W_1, W_2 be elements of the Whittaker spaces (ρ_1,ψ_k), (ρ_2,ψ_k), respectively. Then the following equivalence holds W_1|_P_n = W_2|_P_n if and only if W_1 = W_2. Let W_1 ∈(ρ_1,ψ_k) and W_2 ∈(ρ_2,ψ_k) such that W_1P_n = W_2P_n. Then for all g ∈ G_n, <ref> implies that (gW_1)P_n = (gW_2)P_n. Evaluating at the identity, we see that W_1(g) = (gW_1)(1) = (gW_2)(1) = W_2(g), so W_1=W_2. Let ρ∈_k(G_n) be irreducible cuspidal and fix W ∈𝒲(ρ,ψ_k). If W|_P_n=0 then W=0. In other words the map 𝒲(ρ,ψ_k) →_N_n^P_nψ_k W ↦ W|_P_n is injective (hence an isomorphism of P_n-modules by <ref>). Let ρ_1 and ρ_2 be cuspidal k-representations of G_n and suppose that γ̃(ρ_1×π,ψ) = γ̃(ρ_2×π,ψ) for all irreducible generic representations π of G_n-1. Then ρ_1≅ρ_2. By the previous corollary, for every W_1∈𝒲(ρ_1,ψ_k) there is a unique W_2∈𝒲(ρ_2,ψ_k) such that W_1|_P_n=W_2|_P_n. This gives a morphism of k[G_n]-modules 𝒲(ρ_1,ψ_k)→ S(ρ_1,ρ_2,ψ). Projection on the second factor gives a composite morphism 𝒲(ρ_1,ψ_k) → S(ρ_1,ρ_2,ψ_k)→𝒲(ρ_2,ψ_k), which is nonzero and G-equivariant. Since ρ_1 and ρ_2 are irreducible it follows that ρ_1≅ρ_2. toc § COUNTEREXAMPLES TO THE NAIVE CONVERSE THEOREM We used Sage to discover counterexamples to the naive converse theorem mod ℓ for _2(_q), following the explicit computations for gamma factors in Theorem 21.1 of <cit.>. The code can be found in <cit.> Our main function computes _ℓ-valued gamma factors. We found counterexamples to the converse theorem for the pairs (ℓ, q) = (2,5), (2,17), (3,7), (3,19), (5,11), (11,23), (23,47), (29,59). In all of these situations, q = 2ℓ^i+1 for some positive integer i. Informed by this data, we make the following: The naive converse theorem for mod ℓ representations of _2(𝔽_q) fails exactly when q = 2ℓ^i+1 for some value of i>0. Below, we make this conjecture precise by defining the naïve mod ℓ gamma factor. We then describe the algorithm through which we found the counterexamples. §.§ Mod ell gamma factors of GL(2,Fq) Our computations rely on the explicit realizations of gamma factors of cuspidal representations of _2(𝔽_q) as Gauss sums. The specialization of <ref> to n=2, and m=1 recovers the constructions of <cit.>, and extends them to representations valued in any Noetherian [1/p,ζ_p]-algebra R. 
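Before turning to the precise formulas, here is a self-contained Sage sketch of the Gauss-sum computation that this appendix describes. It is illustrative only: the identifier naive_gamma_factor and the local variable names are placeholders rather than the names used in the actual code, q is assumed to be an odd prime, and a finite field of characteristic ℓ containing the needed roots of unity is used as a stand-in for 𝔽̄_ℓ. It implements the Gauss-sum expression for γ(ρ_ν×ω, ψ) made explicit just below.

```python
# Sage sketch (placeholder names): naive mod-ell gamma factor of GL_2(F_q) as a Gauss sum,
#   gamma(rho_nu x omega, psi) = q^{-1} * nu(-1) * sum_{t in F_{q^2}^x} nu(t) * omega(t*tbar)^{-1} * psi(t + tbar),
# computed in a finite field of characteristic ell large enough to contain all needed roots of unity.
def naive_gamma_factor(ell, q, i, j):
    # q = p an odd prime, ell a prime different from p; rho_{nu_i} is cuspidal when nu_i != nu_i^q.
    Fq2 = GF(q**2, 'a')
    gen = Fq2.multiplicative_generator()          # fixed generator of F_{q^2}^x
    m1 = ZZ(q**2 - 1).prime_to_m_part(ell)        # prime-to-ell part of q^2 - 1
    m2 = ZZ(q - 1).prime_to_m_part(ell)           # prime-to-ell part of q - 1
    d = Mod(ell, q * m1).multiplicative_order()
    K = GF(ell**d, 'b')                           # contains roots of unity of orders q, m1, m2
    g = K.multiplicative_generator()
    zeta1 = g**((ell**d - 1) // m1)               # values of nu_i
    zeta2 = g**((ell**d - 1) // m2)               # values of omega_j
    zetap = g**((ell**d - 1) // q)                # values of the additive character psi
    total = K(0)
    for k in range(q**2 - 1):
        t = gen**k
        nu_t = zeta1**(i * k)                     # nu_i(t) for t = gen^k
        om_inv = zeta2**(-j * k)                  # omega_j(t*tbar)^{-1}, since t*tbar = (gen^(q+1))^k
        psi_tr = zetap**ZZ(t.trace())             # psi(t + tbar); trace down to the prime field F_q
        total += nu_t * om_inv * psi_tr
    nu_minus_one = zeta1**(i * (q**2 - 1) // 2)   # nu_i(-1), using -1 = gen^((q^2-1)/2)
    return nu_minus_one * total / K(q)
```

Comparing these values, for a fixed pair (ℓ, q), across non-isomorphic cuspidal indices i and all characters ω_j is how the duplicate gamma factors behind the counterexamples listed above can be detected.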
For simplicity, assume R is a field, let ρ be an irreducible generic representation of _2(_q), and let ω be a character of _1(_q) = _q^× not exceptional for ρ. Then γ(ρ×ω, ψ) is defined by the functional equation γ(ρ×ω, ψ) ∑_x ∈_q^× W[ x 0; 0 1 ]ω^-1(x) = ∑_x ∈_q^× W[ 0 1; x 0 ]ω^-1(x), for any W ∈𝒲(ρ,ψ). Let R = _ℓ with (q,ℓ) = 1 and (ρ,V) be cuspidal. Vigneras <cit.> constructs ρ=ρ_ν from a character ν of _q^2^×. There is an identification V|_P ≃ _N_2^P_2ψ ≃ {f: _q^×→_ℓ}, where the first isomorphism follows from <ref> and the second is restriction to _q^×≤ P_2. In these coordinates, there is a unique Bessel vector f ∈ V satisfying f(x) = δ_x=1, W_f[ x 0; 0 1 ]=δ_x=1, and ρ(n)f=ψ(n)f, n ∈ N_2. The second property together with the functional equation imply that γ(ρ_ν×ω, ψ) = ∑_x ∈_q^× W_f[ 0 1; x 0 ]ω^-1(x) . Using the properties of f, we replicate the computations of <cit.> using the constructions of <cit.> to recover γ(ρ_ν×ω, ψ) = q^-1ν(-1)∑_t ∈_q^2^×ν(t)ω(tt)^-1ψ(t+t), for t = t^q. This realizes the naïve mod ℓ gamma factor as a Gauss sum. §.§ The algorithm The algorithm executes two tasks: * The function computes gamma factors. * The function detects equalities between gamma factors. The function . Let q be prime. To compute the Gauss sums, we exploit that all groups in sight are cyclic. We have the following variables: * is a choice of generator of _q^2^×, * (resp. ) is the largest divisor of q^2-1 (resp. q-1) coprime to ℓ. * is a choice of primitive root of unity of order * is a choice of primitive root of unity of order * is a choice of primitive q^th root of unity, and we fix the additive character ψ: _q →_ℓ^× so that ψ(1) =. This allows us to identify characters of _q^2^× and _q^× with integers in the relevant ranges as follows: * For i ∈ [0,_-1], the character ν_i of _q^2^× is defined by ν_i() = ^i. We will denote the cuspidal representation ρ_ν_i by ρ_i. * For j ∈ [0,_-1], the character ω_j of _q^× is defined by ν_i(^q+1) = ^j. Note that ^q+1 is a generator of _q^×. The input of the function is the triple (,,). Letting = ((/)), the function returns := * ( (*)* (*)* ( + (*)) [..]) which computes q ·γ(ρ_i,ω_j) = ν_i(-1)∑_k=0^m-1ν_i(^k)ω_j(^(q+1)· k)ψ(^k+^qk). The function . This function compares the output of the function for different values of and . First recall that, ρ_i ≃ρ_i' for i ≠ i' precisely when ν_i = ν_i', i.e. if i' ≡ q· i mod m. The function first runs over all isomorphism classes of cuspidal representations ρ_j, removes duplicates, and records a list of integers  corresponding to a list of non-duplicate ρ_j. In order to reduce runtime and avoid computing unnecessary gamma factors, it next computes for all values of in the list . If two values and have the same gamma factor γ(ρ_i,1), they are added to the list . Finally, the function returns an array of for all in and in the range . The function then runs the utility function , which takes as an input an array and returns a list of duplicates among the rows of the array. Finally, the function prints the list of duplicates. Currently, the speed of the algorithm is restricted by the actual computations of the Gauss sums, which runs at least in O(q^2). § ELL-REGULAR GAMMA FACTORS FOR GL(2) In this appendix we construct an “ℓ-regular” gamma factor for pairs (ρ,ω) where ρ is a mod ℓ representations of _2(_q) and ω is a mod ℓ representation of _1(_q). This modified factor is constructed by restricting to subgroups of matrices with ℓ-regular determinant. 
Namely, the linear functionals giving rise to the gamma factor are defined as sums over these subgroups. In the mirabolic subgroup, the elements with ℓ-regular determinant have ℓ-regular order and form a subgroup. The failure of this property for n > 2 prevents us from extending the strategy. For simplicity, unlike in the main part of this article we only construct the ℓ-regular gamma factors for cuspidal representations. One could probably also treat Whittaker type representations, taking into account exceptional pairs, but we don't pursue this. §.§ Preliminaries As before, let ℓ be a prime different from p and let k be a field of characteristic ℓ that is sufficiently large (this means k contains all the m-th roots of unity where m is the l.c.m. of all the orders of elements of _2(_q)). We write G_n = _n(_q) as before, and will focus on G_2. We regard G_1 ⊂ G_2 as sitting in the top left. Again we work with the mirabolic subgroup P_2 = N_2 ⋊ G_1 ⊂ G_2. We let _2 denote the opposite mirabolic subgroup. We fix an nontrivial group homomorphism ψ: _q → k^×, and view it as a character ψ: N_2 → k^× via the canonical isomorphism N_2 _q. We now define some auxiliary subgroups. First let _q^×,ℓ denote the subgroup of _q^× consisting of ℓ-regular elements, i.e. elements whose orders are not divisible by ℓ. Then let G_2^ℓ := ^-1(_q^×,ℓ) denote the subgroup of matrices with ℓ-regular determinant, and let G_1^ℓ = G_1 ∩ G_2^ℓ and P_2^ℓ := P_2 ∩ G_2^ℓ and _2^ℓ = _2 ∩ G^ℓ. Note P_2^ℓ = N_2 ⋊ G_1^ℓ. The group generated by P_2^ℓ and _2^ℓ is G_2^ℓ. Let H be the subgroup of G_2 generated by P_2^ℓ and _2^ℓ. Clearly P_2^ℓ⊂ G_2^ℓ and _2^ℓ⊂ G_2^ℓ, and thus H⊂ G_2^ℓ. For the opposite inclusion we argue as follows. By row reduction, _2(_q) is generated by the elementary matrices with 1's on the diagonal and a single nonzero entry off the diagonal. Namely, _2(_q) is generated by N_2 and _2. But N_2 ⊂ P_2^ℓ and _2 ⊂_2^ℓ so _2(_q) ⊂ H. We are done if for every element a of _q^×,ℓ we can find an element h ∈ H such that (h) = a (for then H contains a full set of representatives for G_2^ℓ/_2(_q)). But we can just take (a,1). If G is a finite group, H ◃ G and ρ is an irreducible representation of G, then ρ|_H = ⊕_i=1^r ρ_i^e with ρ_i irreducible and ρ_i ≁ρ_j for i ≠ j. The isotypic components ρ_i^e are permuted transitively under conjugation by G and ρ = __G(ρ_i)^G(ρ_i^e) for any i. Since G acts by conjugation on the set ρ_1,…,ρ_r it follows that H ⊂_G(ρ_i). But G acts transitively so r = [G : _G(ρ_i)], which divides [G:H]. If (ρ,V) is an irreducible generic representation of G_2 then _k _N_2(V,ψ) = 1. <cit.> proves that _k _P_2(_N_2^P_2ψ, V) = 1. Equivalently, _k _N_2(ψ, V) = 1 but N_2 is abelian of order prime to ℓ so V|_N_2 splits as the direct sum of characters, so V|_N_2 contains ψ once. Thus _k _N_2(V, ψ) = 1. If ρ is an irreducible generic cuspidal representation then V|_P_2≅_N_2^P_2ψ. Furthermore _N_2^P_2ψ is irreducible. §.§ Definition of the gamma-factor Fix an irreducible generic cuspidal representation (ρ,V) of G_2. By <ref> we get a decomposition ρ|_G_2^ℓ = ⊕_i=1^r ρ_i^e where each (ρ_i,V_i) is an irreducible representation of G_2^ℓ and G_2 permutes them transitively. Moreover, ρ = __G_2(ρ_i)^G_2ρ_i^e for any i. The restriction ρ|_G_2^ℓ is multiplicity free. In other words, e = 1. If N_2 denotes the group of characters N_2 → k^× then by <ref> we have ρ|_N_2 = ⊕_χ≠ 1 ∈N_2χ. 
Each element of N_2 is ℓ-regular and so N_2 ⊂ G_2^ℓ, but then any two irreducible constituents of ρ|_G_2^ℓ cannot be isomorphic because their restrictions to N_2 are not isomorphic. By <ref>, 1 = _k _N_2(V, ψ) = _k _G_2^ℓ(V, _N_2^G_2^ℓψ) = _k (⊕_i=1^r _G_2^ℓ (V_i, _N_2^G_2^ℓψ)) so there exists a unique i_ψ such that _G_2^ℓ(V_i_ψ, _N_2^G_2^ℓψ) ≠ 0 (and is one dimensional). Write (ρ_ψ,V_ψ) for the representation (ρ_i_ψ,V_i_ψ). Fix a generator W_ψ: V_ψ↪_N_2^G_2^ℓψ. The image of W_ψ is denoted ^ℓ(V_ψ) and is called the ℓ-regular Whittaker model of ρ. _N_2^P_2^ℓψ is irreducible. Note N_2 is a normal subgroup of P_2^ℓ with quotient isomorphic to the abelian group G_1^ℓ, so we can write _P_2^ℓ(_N_2^P_2^ℓψ) = _N_2(ψ, _N_2^P_2^ℓ_N_2^P_2^ℓψ) = _N_2(ψ, ⊕_g ∈ G_1^ℓ (x ↦ψ(gx))), which is clearly one dimensional since x ↦ψ(gx) is not equal to ψ for any g ∈ G_1^ℓ except when g = 1. Since P_2^ℓ has order prime to ℓ, the result follows from basic character theory. The composition V_ψ_N_2^G_2^ℓψ_N_2^P_2^ℓψ is an isomorphism of P_2^ℓ-representations. By construction it is a morphism of P_2^ℓ-representations. Both V_ψ and _N_2^P_2^ℓψ are irreducible, so we just need to show that the composition is nonzero. But the Frobenius reciprocity isomorphism _G_2^ℓ(V_ψ, _N_2^G_2^ℓψ) _P_2^ℓ(V_ψ, _N_2^P_2^ℓψ) is precisely composition with ^G_2^ℓ_P_2^ℓ and the fact that W_ρ is nonzero means that its image under the above isomorphism is as well. r = [P_2:P_2^ℓ]. Consequently, _G_2(ρ_ψ) = G_2^ℓ. r = _k ρ/_k ρ_ψ = _k _N_2^P_2ψ/_k _N_2^P_2^ℓψ = [P_2:N_2]/[P_2^ℓ:N_2] = [P_2:P_2^ℓ]. Since ρ = __G(ρ_ψ)ρ_ψ, we obtain [G_2:G_2^ℓ] = [P_2:P_2^ℓ] = [G_2 : _G_2(ρ_ψ)] so the inclusion _G_2(ρ_ψ) ⊂ G_2^ℓ is an equality. Next we prove the key one-dimensionality result that lets us deduce the existence of the gamma factor as the ratio between two linear functionals in a functional equation. Note that because k has characteristic ℓ any character ω: G_1 → k^× is uniquely determined by its values on G_1^ℓ. For any ω: G_1 → k^×, _k _G_1^ℓ(^ℓ(ρ_ψ) ⊗ω, 1) = 1. Note _G_1^ℓ(^ℓ(ρ_ψ) ⊗ω, 1) = _G_1^ℓ(V_ψ,ω^-1). By <ref> we have V_ψ|_P_2^ℓ≅_N_2^P_2^ℓψ. The map _N_2^P_2^ℓψ k[G_1^ℓ] f ↦(x ↦ f[ x 0; 0 1 ]) gives an isomorphism with the regular representation. But G_1^ℓ is a cyclic group of order prime to ℓ so k[G_1^ℓ] contains every k-valued character of G_1^ℓ with multiplicity one. Following <ref>, for W ∈^ℓ(ρ_ψ) and ω: G_1 → k^× a character we define I^ℓ(W, ω) = ∑_x ∈ G_1^ℓ W[ x 0; 0 1 ]ω^-1(x) I^ℓ(W,ω) = ∑_x ∈ G_1^ℓ W[ 0 1; x 0 ]ω^-1(x) Then I^ℓ(W, ω), I^ℓ(W,ω) ∈_G_1^ℓ(^ℓ(ρ_ψ) ⊗ω, 1) are two nonzero elements. But in view of <ref> this space is 1-dimensional, so we make the following definition. For ω: _q^×→ k^× a character, the ℓ-regular gamma factor γ^ℓ(ρ×ω,ψ) ∈ k is the unique (nonzero) element satisfying I^ℓ(W,ω)γ^ℓ(ρ×ω, ψ) = I^ℓ(W,ω). §.§ Converse theorem We now show that the ℓ-regular factor satisfies a converse theorem; our strategy mirrors that of <ref>. Suppose (ρ_1,V_1) and (ρ_2,V_2) are two irreducible generic cuspidal k-linear representations of G_2 and further suppose that γ^ℓ(ρ_1 ×ω,ψ) = γ^ℓ(ρ_2 ×ω,ψ) for all ω: G_1^×→ k^×. Let _1^ℓ = ^ℓ(V_1,ψ) and _2^ℓ = ^ℓ(V_2,ψ). Let S(ρ_1,ρ_2,ψ) := (W_1, W_2) ∈^ℓ_1 ×^ℓ_2 : W_1|_P_2^ℓ = W_2|_P_2^ℓ. By definition there is a diagonal action of G_2^ℓ on S(ρ_1,ρ_2,ψ) and S(ρ_1,ρ_2,ψ) is P_2^ℓ-stable for this action. If g ∈ G_2^ℓ and (W_1, W_2) ∈ S(ρ_1,ρ_2,ψ), then (gW_1, gW_2) ∈ S(ρ_1,ρ_2,ψ). 
First note that since W_1,W_2 are Whittaker functions, (W_1,W_2) ∈ S(ρ_1,ρ_2,ψ) W_1[ x 0; 0 1 ] = W_2[ x 0; 0 1 ] for all x ∈ G_1^ℓ Artin's Lemma I^ℓ(W_1,ω) = I^ℓ(W_2,ω) for all ω equality of γ^ℓI^ℓ(W_1,ω) = I^ℓ(W_2,ω) for all ω Artin's Lemma W_1[ 0 1; x 0 ] = W_2[ 0 1; x 0 ] for all x ∈ G_1^ℓ W_1[ x 0; 0 1 ] = W_2[ x 0; 0 1 ] for all x ∈ G_1^ℓ (W_1,W_2) ∈ S(ρ_1^∨,ρ_2^∨,ψ^-1) Here Artin's Lemma (the n=1 version of completeness of Whittaker models) refers to the dual statement to linear independence of characters, which holds for k-valued characters of an abelian group H, provided that char(k) ∤H, see <cit.>. Now if p∈_2^ℓ and W ∈(ρ_i,ψ) (for i = 1,2), then for all g ∈ G_2^ℓ pW(g) = (pW)([ 0 1; 1 0 ](^ι g)) = W([ 0 1; 1 0 ](^ι g)p) = W([ 0 1; 1 0 ](^ι(g(^ιp)))) = W(g(^ιp)) = (^ιpW)(g). Thus if (W,W') ∈ S(ρ_1,ρ_2,ψ) then (pW, pW') = (^ιpW, ^ιpW') ∈ S(ρ^∨,σ^∨,ψ^-1) since S(ρ^∨,σ^∨,ψ^-1) is P_2^ℓ-stable and ^ιp∈ P_2^ℓ. By the above equivalences we see that (pW,pW') ∈ S(ρ_1,ρ_2,ψ). We conclude by noting that P_2^ℓ and _2^ℓ generate G_2^ℓ. If W_1 ∈^ℓ_1 and W_2 ∈^ℓ_2, then W_1P_2^ℓ = W_2P_2^ℓ if and only if W_1 = W_2. If W_1P_2^ℓ = W_2P_2^ℓ then for all g ∈ G_2^ℓ, <ref> implies that (gW_1)P_2^ℓ = (gW_2)P_2^ℓ. Evaluating at the identity, we see that W_1(g) = (gW_1)(1) = (gW_2)(1) = W_2(g). ρ_1 ≅ρ_2. Since ρ_1 = _G_2^ℓ^G_2ρ_1,ψ and ρ_2 = _G_2^ℓ^G_2ρ_2,ψ, it suffices to show that _1^ℓ = _2^ℓ, since then ρ_1,ψ≅ρ_2,ψ. But _1^ℓ|_P_2 = _2^ℓ|_P_2, so we apply <ref> to conclude.
http://arxiv.org/abs/2307.04173v1
20230709133912
Budgeted Matroid Maximization: a Parameterized Viewpoint
[ "Ilan Doron-Arad", "Ariel Kulik", "Hadas Shachnai" ]
cs.DS
[ "cs.DS" ]
Budgeted Matroid Maximization: a Parameterized Viewpoint
Ilan Doron-Arad, Ariel Kulik, Hadas Shachnai
===================================================
We study budgeted variants of well-known maximization problems with multiple matroid constraints. Given an ℓ-matchoid on a ground set E, a profit function p:E →ℝ_≥ 0, a cost function c:E →ℝ_≥ 0, and a budget B ∈ℝ_≥ 0, the goal is to find in the ℓ-matchoid a feasible set S of maximum profit p(S) subject to the budget constraint, i.e., c(S) ≤ B. The budgeted ℓ-matchoid (BM) problem includes as special cases budgeted ℓ-dimensional matching and budgeted ℓ-matroid intersection. A strong motivation for studying BM from a parameterized viewpoint comes from the APX-hardness of unbudgeted ℓ-dimensional matching (i.e., B = ∞) already for ℓ = 3. Nevertheless, while there are known FPT algorithms for the unbudgeted variants of the above problems, the budgeted variants are studied here for the first time through the lens of parameterized complexity. We show that BM parametrized by solution size is W[1]-hard, already with a degenerate single matroid constraint. Thus, an exact parameterized algorithm is unlikely to exist, motivating the study of FPT-approximation schemes (FPAS). Our main result is an FPAS for BM (implying an FPAS for ℓ-dimensional matching and budgeted ℓ-matroid intersection), relying on the notion of representative set - a small cardinality subset of elements which preserves the optimum up to a small factor. We also give a lower bound on the minimum possible size of a representative set which can be computed in polynomial time. § INTRODUCTION Numerous combinatorial optimization problems can be interpreted as constrained budgeted problems. In this setting, we are given a ground set E of elements and a family ⊆ 2^E of subsets of E known as the feasible sets. We are also given a cost function c:E→ℝ, a profit function p:E→ℝ, and a budget B ∈ℝ. A solution is a feasible set S ∈ of bounded cost c(S) ≤ B.[For a function f:A →ℝ and a subset of elements C ⊆ A, define f(C) = ∑_e ∈ C f(e).] Broadly speaking, the goal is to find a solution S of maximum profit. Notable examples include budgeted matching <cit.> and budgeted matroid intersection <cit.>, shortest weight-constrained path <cit.>, and constrained minimum spanning trees <cit.>. Despite the wide interest in constrained budgeted problems in approximation algorithms, not much is known about this intriguing family of problems in terms of parameterized complexity. In this work, we study budgeted maximization with the fairly general ℓ-dimensional matching, ℓ-matroid intersection, and ℓ-matchoid constraints. An ℓ-dimensional matching constraint is a set system (E,), where E ⊆ U_1 ×…× U_ℓ for ℓ sets U_1, …, U_ℓ. The feasible sets are all subsets S ⊆ E which satisfy the following. For any two distinct tuples (e_1,…, e_ℓ), (f_1,…, f_ℓ) ∈ S and every i ∈ [ℓ] it holds that e_i ≠ f_i.[For any k ∈ℕ let [k] = {1,2,…,k}.] Informally, the input for budgeted ℓ-dimensional matching is an ℓ-dimensional matching constraint (E,), profits and costs for the elements in E, and a budget. The objective is to find a feasible set which maximizes the profit subject to the budget constraint (see below the formal definition). We now define an ℓ-matroid intersection. A matroid is a set system (E, ), where E is a finite set and ⊆ 2^E, such that * ∅∈. * The hereditary property: for all A ∈ and B ⊆ A it holds that B ∈. * The exchange property: for all A,B ∈ where |A| > |B| there is e ∈ A ∖ B such that B ∪{e}∈. 
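To make the ℓ-dimensional matching constraint above concrete, here is a minimal Python sketch (ours, purely illustrative; the function name is not from the paper) that tests whether a set S of ℓ-tuples is feasible, i.e., that no two distinct tuples agree in any coordinate.

```python
# Illustrative sketch (not from the paper): feasibility test for an
# l-dimensional matching constraint.
def is_matching_feasible(S, ell):
    seen = [set() for _ in range(ell)]       # values already used in each coordinate
    for tup in S:
        assert len(tup) == ell, "every element must be an l-tuple"
        for i, v in enumerate(tup):
            if v in seen[i]:                 # coordinate i already uses value v
                return False
            seen[i].add(v)
    return True

# l = 3: the first pair of triples is coordinate-disjoint, the second is not.
print(is_matching_feasible([("a1", "b1", "c1"), ("a2", "b2", "c2")], 3))  # True
print(is_matching_feasible([("a1", "b1", "c1"), ("a1", "b2", "c2")], 3))  # False
```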
For a fixed ℓ≥ 1, let (E,_1), (E,_2), …, (E,_ℓ) be ℓ matroids on the same ground set E. An ℓ-matroid intersection is a set system (E,) where = _1 ∩_2 ∩…∩_ℓ. Observe that ℓ-dimensional matching, where E ⊆ U_1 ×…× U_ℓ, is a special case of ℓ-matroid intersection: For each i ∈ [ℓ], define a partition matroid (E,_i), where any feasible set S ∈_i may contain each element e ∈ U_i in the i-th coordinate at most once, i.e., _i = {S ⊆ E | ∀ (e_1,…, e_ℓ) ≠ (f_1,…, f_ℓ) ∈ S : e_i ≠ f_i}. We give an illustration in Figure <ref>. It can be shown that (E,_i) is a matroid for all i ∈ℓ (see, e.g., <cit.>). The above constraint families can be generalized to the notion of ℓ-matchoid. Informally, an is an intersection of an unbounded number of matroids, where each element belongs to at most ℓ of the matroids. Formally, for any ℓ≥ 1, an on a set E is a collection = { M_i = (E_i, _i) }_i ∈ [s] of s ∈ℕ matroids, where for each i ∈ [s] it holds that E_i ⊆ E, and every e ∈ E belongs to at most ℓ sets in {E_1, …, E_s}, i.e., |{i∈ [s]  |  e∈ E_i}| ≤ℓ. A set S ⊆ E is feasible for if for all i ∈ [s] it holds that S ∩ E_i ∈_i. Let () = {S⊆ E  | ∀ i ∈ [s]: S∩ E_i∈_i} be all feasible sets of . For all k ∈ℕ, we use _k ⊆() to denote all feasible sets of of cardinality at most k. Clearly, ℓ-matroid intersection (and also ℓ-dimensional matching) is the special case of where the s (= ℓ) matroids are defined over the same ground set E. In the budgeted ℓ-matchoid (BM) problem, we are given an ℓ-matchoid along with a cost function, profit function, and a budget; our goal is to maximize the profit of a feasible set under the budget constraint. The budgeted ℓ-matroid intersection (BMI) and budgeted ℓ-dimensional matching (BDM) are the special cases where the is an ℓ-matroid intersection and ℓ-dimensional matching, respectively. Each of these problems generalizes the classic 0/1-knapsack, where all sets are feasible. Figure <ref> shows the relations between the problems. Henceforth, we focus on the BM problem. Formally, a BM instance is a tuple I = (E, , c,p, B,k,ℓ), where E is a ground set of elements, is an on E, c:E →ℕ_> 0 is a cost function, p:E →ℕ_> 0 is a profit function, B ∈ℕ_> 0 is a budget, and k,ℓ∈ℕ_> 0 are integer parameters.[We assume integral values for simplicity; our results can be generalized also for real values.] In addition, each matroid (E_i,_i) ∈ has a membership oracle, which tests whether a given subset of E_i belongs to _i or not in a single query. A solution of I is a feasible set S ∈_k such that c(S) ≤ B. The objective is to find a solution S of I such that p(S) is maximized. We consider algorithms parameterized by k and ℓ (equivalently, k+ℓ). We note that even with no budget constraint (i.e., c(E)< B), where the is restricted to be a 3-dimensional matching, BM is MAX SNP-complete <cit.>, i.e., it cannot admit a polynomial time approximation scheme (PTAS) unless P=NP. On the other hand, the ℓ-dimensional matching and even the problem (without a budget), parameterized by ℓ and the solution size k, are fixed parameter tractable (FPT) <cit.>. This motivates our study of BM through the lens of parameterized complexity. We first observe that BM parameterized by the solution size is W[1]-hard, already with a degenerate matroid where all sets are feasible (i.e., knapsack parametrized by the cardinality of the solution, k). BM is W[1]-hard. By the hardness result in Lemma <ref>, the best we can expect for BM in terms of parametrized algorithms, is an FPT-approximation scheme (FPAS). 
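As a hedged companion to the reduction just described, the following sketch (our own; the helper names partition_oracle and is_matchoid_feasible are not from the paper) realizes each partition matroid I_i as a membership oracle and checks feasibility in an ℓ-matchoid by querying every oracle on the restriction S ∩ E_i, as in the oracle model assumed for BM above.

```python
# Illustrative sketch (ours): an l-matchoid as a list of (E_i, oracle_i) pairs,
# where oracle_i(T) answers whether T belongs to I_i.
def partition_oracle(i):
    """Partition matroid for coordinate i: no two tuples of T share that coordinate."""
    def oracle(T):
        vals = [t[i] for t in T]
        return len(vals) == len(set(vals))
    return oracle

def is_matchoid_feasible(S, matroids):
    """S is feasible iff S ∩ E_i is independent in every matroid (E_i, I_i)."""
    return all(oracle_i(S & E_i) for E_i, oracle_i in matroids)

# The 3-dimensional matching constraint viewed as a 3-matroid intersection.
E = {("a1", "b1", "c1"), ("a2", "b2", "c2"), ("a1", "b2", "c2")}
matroids = [(E, partition_oracle(i)) for i in range(3)]
print(is_matchoid_feasible({("a1", "b1", "c1"), ("a2", "b2", "c2")}, matroids))  # True
print(is_matchoid_feasible({("a1", "b1", "c1"), ("a1", "b2", "c2")}, matroids))  # False
```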
An FPAS with parameterization κ for a maximization problem Π is an algorithm whose input is an instance I of Π and an >0, which produces a solution S of I of value (1-) ·(I) in time f(,κ(|I|)) · |I|^O(1) for some computable function f, where |I| denotes the encoding size of I and (I) is the optimum value of I. We refer the reader to <cit.> for comprehensive surveys on parameterized approximation schemes and parameterized approximations in general. To derive an FPAS for BM, we use a small cardinality representative set, which is a subset of elements containing the elements of an almost optimal solution for the instance. The representative set has a cardinality depending solely on ℓ,k,^-1 and is constructed in FPT time. Formally, Let I = (E, , c,p, B,k,ℓ) be a BM instance, 0<<1/2 and R ⊆ E. Then R is a representative set of I and if there is a solution S of I such that the following holds. * S ⊆ R. * p(S) ≥ (1-2) ·(I). We remark that Definition <ref> slightly resembles the definition of lossy kernel <cit.>. Nonetheless, the definition of lossy kernel does not apply to problems in the oracle model, including BM (see Section <ref> for further details). The main technical contribution of this paper is the design of a small cardinality representative set for BM. Our representative set is constructed by forming a collection of f(ℓ, k,^-1) profit classes, where the elements of each profit class have roughly the same profit. Then, to construct a representative set for the instance, we define a residual problem for each profit class which enables to circumvent the budget constraint. These residual problems can be solved efficiently using a construction of <cit.>. We show that combining the solutions for the residual problems, we obtain a representative set. In the following, we use O(n) for O(n ·poly(log (n))). There is an algorithm that given a BM instance I = (E, , c,p, B,k,ℓ) and 0< <1/2, returns in time |I|^O(1) a representative set R ⊆ E of I and such that |R| = O(ℓ^(k-1) ·ℓ· k^2 · ^-2). Given a small cardinality representative set, it is easy to derive an FPAS. Specifically, using an exhaustive enumeration over the representative set as stated in Lemma <ref>, we can construct the following FPAS for BM, which naturally applies also for BMI and BDM. For any BM instance I = (E, , c,p, B,k,ℓ) and 0<<1/2, there is an FPAS whose running time is |I|^O(1)·O( ℓ^k^2 ·ℓ· k^O(k)·^-2k). To complement the above construction of a representative set, we show that even for the special case of an ℓ-dimensional matching constraint, it is unlikely that a representative set of significantly smaller cardinality can be constructed in polynomial time. The next result applies to the special case of BDM. For any function f:ℕ→ℕ, and c_1,c_2 ∈ℝ such that c_2-c_1<0, there is no algorithm which finds for a given BM instance I = (E, , c,p, B,k,ℓ) and 0<<1/2 a representative set of size O ( f(ℓ) · k^ℓ-c_1·1/^c_2) of I and in time |I|^O(1), unless coNP⊆NP / poly. In the proof of Lemma <ref>, we use a lower bound on the kernel size of the Perfect 3-Dimensional Matching (3-PDM) problem, due to Dell and Marx <cit.>.[We refer the reader e.g., to <cit.>, for the formal definition of kernels.] In our hardness result, we are able to efficiently construct a kernel for 3-PDM using a representative set for BM, already for the special case of 3-dimensional matching constraint, uniform costs, and uniform profits. 
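To see why a small representative set immediately yields an FPAS, consider the following enumeration sketch (ours; the function and argument names are illustrative, and `feasible` stands for the matchoid membership oracle). It scans all subsets of R of size at most k that respect the budget and keeps the most profitable feasible one; since |R| depends only on ℓ, k and the accuracy parameter, this |R|^O(k) enumeration runs in FPT time in k + ℓ for any fixed accuracy.

```python
from itertools import combinations

# Illustrative sketch (ours): exhaustive search over a representative set R.
# p, c: dicts mapping elements to profits/costs; feasible: matchoid oracle;
# B: budget; k: bound on the solution size.
def fpas_by_enumeration(R, feasible, p, c, B, k):
    best, best_profit = frozenset(), 0
    for size in range(1, k + 1):
        for F in combinations(R, size):
            F = frozenset(F)
            if sum(c[e] for e in F) <= B and feasible(F):
                profit = sum(p[e] for e in F)
                if profit > best_profit:
                    best, best_profit = F, profit
    return best, best_profit
```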
§.§ Related Work While BM is studied here for the first time, special cases of the problem have been extensively studied from both parameterized and approximative points of view. For maximum weighted without a budget constraint, Huang and Ward <cit.> obtained a deterministic FPT algorithm, and algorithms for a more general problem, involving a coverage function objective rather than a linear objective. Their result differentiates the problem from the matroid ℓ-parity problem which cannot have an FPT algorithm in general matroids <cit.>. Interestingly, when the matroids are given a linear representation, the matroid ℓ-parity problem admits a randomized FPT algorithm <cit.> and a deterministic FPT algorithm <cit.>. We use a construction of <cit.> as a building block of our algorithm. The ℓ-dimensional k-matching problem (i.e., the version of the problem with no budget parametrized by k and ℓ) has received considerable attention in previous studies. Goyal et al. <cit.> presented a deterministic FPT algorithm whose running time is O^*(2.851^(ℓ-1) · k) for the weighted version of ℓ-dimensional k-matching, where O^* is used to suppress polynomial factor in the running time. This result improves a previous result of <cit.>. For the unweighted version of ℓ-dimensional k-matching, the state of the art is a randomized FPT algorithm with running time O^*(2^(ℓ-2) · k) <cit.>, improving a previous result for the problem <cit.>. Budgeted problems are well studied in approximation algorithms. As BM is a generalization of classic 0/1-knapsack, it is known to be NP-hard. However, while knapsack admits a fully PTAS (FPTAS) <cit.>, BM is unlikely to admit a PTAS, even for the special case of 3-dimensional matching with no budget constraint <cit.>. Consequently, there has been extensive research work to identify special cases of BM which admit approximation schemes. For the budgeted matroid independent set (i.e., the special case of BM where the consists of a single matroid), Doron-Arad et al. <cit.> developed an efficient PTAS (EPTAS) using the representative set based technique. This algorithm was later generalized in <cit.> to tackle budgeted matroid intersection and budgeted matching (both are special cases of BM where ℓ = 2), improving upon a result of Berger et al. <cit.>. We generalize some of the technical ideas of <cit.> to the setting of ℓ-matchoid and parametrized approximations. Organization of the paper: Section <ref> describes our construction of a representative set. In Section <ref> we present our FPAS for BM. Section <ref> contains the proofs of the hardness results given in Lemma <ref> and in Lemma <ref>. In Section <ref> we present an auxiliary approximation algorithm for BM. We conclude in Section <ref> with a summary and some directions for future work. § REPRESENTATIVE SET In this section we construct a representative set for BM. Our first step is to round the profits of a given instance, and to determine the low profit elements that can be discarded without incurring significant loss of profit. We find a small cardinality representative set from which an almost optimal solution can be selected via enumeration yielding an FPAS (see Section <ref>). We proceed to construct a representative set whose cardinality depends only on ^-1,k, and ℓ. This requires the definition of profit classes, namely, a partition of the elements into groups, where the elements in each group have similar profits. 
Constructing a representative set using this method requires an approximation of the optimum value of the input BM instance I. To this end, we use a 1/2 ℓ-approximation α = (I) of the optimum value (I) described below. Given a BM instance I = (E, , c,p, B,k,ℓ), there is an algorithm which returns in time |I|^O(1) a value α such that (I)/2ℓ≤α≤(I). The proof of Lemma <ref> is given in Section <ref>. The proof utilizes a known approximation algorithm for the unbudgeted version of BM <cit.> which is then transformed into an approximation algorithm for BM using a technique of <cit.>. The first step in designing the profit classes is to determine a set of profitable elements. required for obtaining an almost optimal solution. This set allows us to construct only a small number of profit classes. We define the set of profitable elements w.r.t. I, α, and as H[I,α,] = { e ∈ E | ·α/k < p(e) ≤ 2·ℓ·α}. When clear from the context, we simply use H = H[I,α,]. Consider the non-profitable elements. The next lemma states that omitting these elements indeed has small effect on the profit of the solution set. For every BM instance I = (E, , c,p, B,k,ℓ), (I)/2ℓ≤α≤(I), 0<<1/2, and S ∈_k it holds that p ( S ∖ H[I,α,] ) ≤·(I). We note that p ( S ∖ H[I,α(I),] ) ≤ k ··α/k = ·α≤·(I). The first inequality holds since each element in S ∖ H[I,α(I),] has profit at most ·α/k by (<ref>); in addition, since S ∈_k it follows that S contains at most k elements. The second inequality holds as α≤(I). Using Lemma <ref>, our representative set can be constructed exclusively from profitable elements. We can now partition the profitable elements into a small number of profit classes. There is a profit class r for a suitable range of profit values. Specifically, let D(I,) = {r ∈ℕ_>0 |  (1-)^r-1≥/2 ·ℓ· k}, and we simplify by D = D(I,). For all r ∈ D, and (I)/2ℓ≤α≤(I), define the r-profit class as _r(α) = {e ∈ E |  p(e)/2 ·ℓ·α∈( (1-)^r, (1-)^r-1]}. In words, each profit class r ∈ D contains profitable elements (and may contain some elements that are almost profitable due to our 1/2ℓ-approximation for (I)), where the profits of any two elements that belong to the r-profit class can differ by at most a multiplicative factor of (1-). We use the following simple upper bound on the number of profit classes. For every BM instance I and 0<<1/2 there are O( k ·ℓ·^-2) profit classes. We note that log_1-(/2 ℓ· k) ≤ln(2 ℓ· k/)/-ln(1-)≤ 2 ℓ· k ·^-1/. The second inequality follows from x< -ln (1-x), ∀ x>-1, x ≠ 0, and ln (y) < y, ∀ y>0. By (<ref>) the number of profit classes is bounded by |D| ≤log_1-( / 2 ℓ· k)+1 = O( k ·ℓ·^-2). The last inequality follows from (<ref>). Next, we define an exchange set for each profit class. This facilitates the construction of a representative set. Intuitively, a subset of elements X forms an exchange set for a profit class _r(α) if any feasible set Δ and element a ∈ (Δ∩_r(α)) ∖ X can be replaced (while maintaining feasibility) by some element b ∈ (X ∩_r(α)) ∖Δ such that the cost of b is upper bounded by the cost of a. Formally, Let I = (E, , c,p, B,k,ℓ) be a BM instance, 0<<1/2, (I)/2ℓ≤α≤(I), r ∈ D (I,), and X ⊆_r(α). We say that X is an exchange set for I,,α, and r if: * For all Δ∈_k and a ∈ (Δ∩_r(α)) ∖ X there is b ∈ (_r(α) ∩ X) ∖Δ satisfying * c(b) ≤ c(a). * Δ-a+b ∈_k. The key argument in this section is that if a set R ⊆ E satisfies that R ∩_r(α) is an exchange set for any r ∈ D, then R is a representative set. 
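For concreteness, here is a small sketch (ours; the names and the floating-point handling are illustrative only) of how one could filter the profitable elements and assign each of them to its profit class, following the two definitions above.

```python
import math

# Illustrative sketch (ours): profitable elements and profit-class assignment.
# p: dict of profits; alpha: a value with OPT/(2l) <= alpha <= OPT; 0 < eps < 1/2.
def profitable_elements(E, p, alpha, eps, k, ell):
    return {e for e in E if eps * alpha / k < p[e] <= 2 * ell * alpha}

def profit_class(e, p, alpha, eps, ell):
    """Return r >= 1 with p(e)/(2*l*alpha) in ((1-eps)^r, (1-eps)^(r-1)]."""
    ratio = p[e] / (2 * ell * alpha)
    # (1-eps)^r < ratio <= (1-eps)^(r-1)  <=>  r - 1 <= log_{1-eps}(ratio) < r
    return math.floor(math.log(ratio, 1 - eps)) + 1
```

Elements of the same class have profits within a factor of (1-ε) of each other, which is what the substitution argument below relies on.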
This allows us to construct a representative set using a union of disjoint exchange sets, one for each profit class. We give an illustration in Figure <ref>. Let I = (E, , c,p, B,k,ℓ) be a BM instance, 0<<1/2, (I)/2ℓ≤α≤(I), and R ⊆ E. If for all r ∈ D = D(I,) it holds that R ∩_r(α) is an exchange set for I,,α, and r, then R is a representative set of I and . For the proof of Lemma <ref>, we define a substitution of some feasible set G ∈_k. We will use G later only as an optimal solution; however, we can state the following claims for a general G ∈_k. We require that a substitution preserves the number of profitable elements in G from each profit class, so a substitution guarantees a profit similar to the profit of G. For G ∈_k and Z_G ⊆⋃_r ∈ D_r(α), we say that Z_G is a substitution of G if the following holds. * Z_G ∈_k. * c(Z_G) ≤ c(G). * For all r ∈ D it holds that |_r(α) ∩ Z_G| = |_r(α) ∩ G|. Proof of Lemma <ref>: We first show that every set G∈_k has a substitution which is a subset of R. For any G ∈_k there is a substitution Z_G of G such that Z_G ⊆ R. Let G ∈_k and let Z_G be a substitution of G such that |Z_G ∩ R| is maximal among all substitutions of G; formally, let 𝒮(G) be all substitutions of G and let Z_G ∈{ Z ∈𝒮(G) |  |Z ∩ R| = max_Z' ∈𝒮(G) |Z' ∩ R|}. Since G ∩⋃_r ∈ D_r(α) is in particular a substitution of G it follows that 𝒮(G)≠∅; thus, Z_G is well defined. Assume towards a contradiction that there is a ∈ Z_G ∖ R; then, by Definition <ref> there is r ∈ D such that a ∈_r(α). Because R ∩_r(α) is an exchange set for I,,α, and r, by Definition <ref> there is b ∈ (_r(α) ∩ R) ∖ Z_G such that c(b) ≤ c(a) and Z_G -a+b ∈_k. Then, the properties of Definition <ref> are satisfied for Z_G-a+b by the following. * Z_G -a +b ∈_k by the definition of b. * c(Z_G-a+b) ≤ c(Z_G) ≤ c(G) because c(b) ≤ c(a). * for all r' ∈ D it holds that |_r'(α) ∩ (Z_G-a+b)| = |_r'(α) ∩ Z_G| = |_r'(α) ∩ G| because a,b ∈_r(α). By the above, and using and Definition <ref>, we have that Z_G+a-b is a substitution of G; that is, Z_G+a-b ∈𝒮(G). Moreover, |R ∩ (Z_G -a+b)|>|R ∩ Z_G| = max_Z ∈𝒮(G) |Z ∩ R|. The first inequality holds since a ∈ Z_G ∖ R and b ∈ R. Thus, we have found a substitution of G which contains more elements in R than Z_G ∈𝒮(G). A contradiction to the definition of Z_G as a substitution of G having a maximum number of elements in R. Hence, Z_G ⊆ R, as required. Let G be an optimal solution for I. We complete the proof of Lemma <ref> by showing that a substitution of G which is a subset of R yields a profit at least (1-2) ·(I). Let H[I,α,] = H be the set of profitable elements w.r.t. I, α and (as defined in (<ref>)). By Claim <ref>, as G ∈_k, it has a substitution Z_G ⊆ R. Then, p(Z_G) ≥ ∑_r ∈ D p(_r(α) ∩ Z_G) ≥ ∑_r ∈ D s.t. _r(α) ≠∅ |_r(α) ∩ Z_G| ·min_e ∈_r(α) p(e) ≥ ∑_r ∈ D s.t. _r(α) ≠∅ |_r(α) ∩ G | · (1-) ·max_e ∈_r(α) p(e) ≥ (1-) · p(G ∩ H). The third inequality follows from (<ref>), and from Property <ref> in Definition <ref>. The last inequality holds since for every e ∈ H there is r ∈ D such that e ∈_r(α), by (<ref>) and (<ref>). Therefore, p(Z_G) ≥ (1-) · p(G ∩ H) = (1-) ·( p(G) - p(G ∖ H) ) ≥ (1-) · p(G)-p(G ∖ H) ≥ (1-) · p(G)- ·(I) = (1-) ·(I)- ·(I) = (1-2) ·(I). The first inequality follows from (<ref>). The last inequality holds by Lemma <ref>. The second equality holds since G is an optimal solution for I. To conclude, by Properties <ref> and <ref> in Definition <ref>, it holds that Z_G ∈_k, and c ( Z_G ) ≤ c(G) ≤ B; thus, Z_G is a a solution for I. 
Also, by (<ref>), it holds that p ( Z_G ) ≥ (1-2) ·(I) as required (see Definition <ref>). By Lemma <ref>, our end goal of constructing a representative set is reduced to efficiently finding exchange sets for all profit classes. This can be achieved by the following result, which is a direct consequence of Theorem 3.6 in <cit.>.[The result of <cit.> refers to a maximization version of exchange sets; however, the same construction and proof hold for our exchange sets as well.] Given a BM instance I = (E, , c,p, B,k,ℓ), 0< <1/2, (I)/2ℓ≤α≤(I), and r ∈ D (I,), there is an algorithm which returns in time O(ℓ^(k-1) ·ℓ· k) · |I|^O(1) an exchange set X for I,,α, and r, such that |X| = O( ℓ^(k-1) ·ℓ· k). Using Lemmas <ref> and <ref>, a representative set of I can be constructed as follows. If the parameters ℓ and k are too high w.r.t. |I|, return the trivial representative set E in polynomial time. Otherwise, compute an approximation for (I), and define the profit classes. Then, the representative set is constructed by finding an exchange set for each profit class. The pseudocode of the algorithm is given in Algorithm <ref>. Given a BM instance I = (E, , c,p, B,k,ℓ), and 0< <1/2, Algorithm  <ref> returns in time |I|^O(1) a representative set R ⊆ E of I and such that |R| = O(ℓ^(k-1) ·ℓ· k^2 · ^-2). Clearly, if ℓ^(k-1) ·ℓ· k^2 ·^-2 > |I|, then by Step <ref> the algorithm runs in time |I|^O(1) and returns the trivial representative set E. Thus, we may assume below that ℓ^(k-1) ·ℓ· k^2 ·^-2≤ |I|. The running time of Step <ref> is |I|^O(1) by Lemma <ref>. Each iteration of the for loop in Step <ref> can be computed in time O(ℓ^(k-1) ·ℓ· k) · |I|^O(1), by Lemma <ref>. Hence, as we have |D| = |D(I,)| iterations of the for loop, the running time of the algorithm is bounded by |D| ·O(ℓ^(k-1) ·ℓ· k) · |I|^O(1)≤ (2 ℓ· k ·^-2 +1) ·O(ℓ^(k-1) ·ℓ· k) · |I|^O(1) = O(ℓ^(k-1) ·ℓ +1· k^2 ·^-2) · |I|^O(1). The first inequality follows from (<ref>) and (<ref>). As in this case ℓ^(k-1) ·ℓ· k^2 ·^-2≤ |I|, we have the desired running time. For the cardinality of R, note that by Lemma <ref> (I) ≥α≥(I)/2 ℓ. Thus, by Lemma <ref>, for all r ∈ D, (I,,α,r) is an exchange set satisfying | (I,,α,r) | = O(ℓ^(k-1) ·ℓ· k). Then, |R| ≤ |D| ·O(ℓ^(k-1) ·ℓ· k) ≤ (2 ℓ· k ·^-2 +1) ·O(ℓ^(k-1) ·ℓ· k) = O(ℓ^(k-1) ·ℓ +1· k^2 ·^-2). The second inequality follows from (<ref>) and (<ref>). To conclude, we show that R is a representative set. By Lemma <ref>, for all r ∈ D, it holds that (I,,α,r) is an exchange set for I,,α, and r. Therefore, R ∩_r(α) is an exchange set for I,,α, for all r ∈ D. Hence, by Lemma <ref>, R is a representative set of I and . Proof of Lemma <ref>: The statement of the lemma follows from Lemma <ref>. § AN FPT APPROXIMATION SCHEME In this section we use the representative set constructed by Algorithm <ref> to obtain an FPAS for BM. For the discussion below, fix a BM instance I = (E, , c,p, B,k,ℓ) and an error parameter 0< <1/2. Given the representative set R for I and output by algorithm , we derive an FPAS by exhaustive enumeration over all solutions of I within R. The pseudocode of our FPAS is given in Algorithm <ref>. Given a BM instance I = (E, , c,p, B,k,ℓ) and 0<<1/2, Algorithm <ref> returns in time |I|^O(1)·O( ℓ^k^2 ·ℓ· k^2k·^-2k) a solution for I of profit at least (1-2) ·(I). We can now prove our main result. Proof of Lemma <ref>: The proof follows from Lemma <ref> by using in Algorithm <ref> an error parameter ' = /2. For the proof of Lemma <ref>, we use the next auxiliary lemmas. 
Given a BM instance I = (E, , c,p, B,k,ℓ) and 0<<1/2, Algorithm <ref> returns a solution for I of profit at least (1-2) ·(I). By Lemma <ref>, it holds that R = (I,) is a representative set of I and . Therefore, by Definition <ref>, there is a solution S for I such that S ⊆ R, and p(S) ≥ (1-2) ·(I). Since S is a solution for I, it follows that S ∈_k and therefore |S| ≤ k. Thus, there is an iteration of Step <ref> in which F = S, and therefore the set A returned by the algorithm satisfies p(A) ≥ p(S) ≥ (1-2) ·(I). Also, the set A returned by the algorithm must be a solution for I: If A = ∅ the claim trivially follows since ∅ is a solution for I. Otherwise, the value of A has been updated in Step <ref> of Algorithm <ref> to be some set F ⊆ R, but this step is reached only if F is a solution for I. Given a BM instance I = (E, , c,p, B,k,ℓ) and 0<<1/2, the running time of Algorithm <ref> is |I|^O(1)·O( ℓ^k^2 ·ℓ· k^2k·^-2k). Let W' = {F ⊆ R |  F ∈_k, c(F) ≤ B} be the solutions considered in Step <ref> of Algorithm <ref>, and let W = {F ⊆ R |  |F| ≤ k}. Observe that the number of iterations of Step <ref> of Algorithm <ref> is bounded by |W|, since W' ⊆ W and for each F ∈ W we can verify in polynomial time if F ∈ W'. Thus, it suffices to upper bound W. By a simple counting argument, we have that |W| ≤ ( |R|+1)^k ≤ O( ( ℓ^(k-1) ·ℓ +1· k^2 ·^-2)^k ) = O( ℓ^k^2 ·ℓ· k^2k·^-2k) The first equality follows from Lemma <ref>. Hence, by (<ref>), the number of iterations of the for loop in Step <ref> is bounded by O( ℓ^k^2 ·ℓ· k^2k·^-2k). In addition, the running time of each iteration is at most |I|^O(1). Finally, the running time of the steps outside the for loop is |I|^O(1), by Lemma <ref>. Hence, the running time of Algorithm <ref> can be bounded by |I|^O(1)·O( ℓ^k^2 ·ℓ· k^2k·^-2k). Proof of Lemma <ref>: The proof follows from Lemmas <ref> and <ref>. § HARDNESS RESULTS In this section we prove Lemma <ref> and Lemma <ref>. In the proof of Lemma <ref>, we use a reduction from the k-subset sum (KSS) problem. The input for KSS is a set X = {x_1, …, x_n} of strictly positive integers and two positive integers T,k>0. We need to decide if there is a subset S ⊆ [n], |S| = k such that ∑_i ∈ S x_i = T, where the problem is parameterized by k. KSS is known to be W[1]-hard <cit.>. Proof of Lemma <ref>: Let U be a KSS instance with the set of numbers E = [n], target value T, and k. We define the following BM instance I = (E, , c,p, B,k,ℓ),. * is a 1-matchoid = {(E,)} such that = 2^E. That is, is a single uniform matroid whose independent sets are all possible subsets of E. * For any i ∈ E = [n] define c(i) = p(i) = x_i+2 ·∑_j ∈ [n] x_j. * Define the budget as B = T+2 k ·∑_j ∈ [n] x_j. If there is a solution for U then there is a solution for I of profit B. Let S ⊆ [n], |S| = k such that ∑_i ∈ S x_i = T. Then, c(S) = p(S) = ∑_i ∈ S( x_i+2 ·∑_j ∈ [n] x_j ) = T+|S| · 2 ·∑_j ∈ [n] x_j = T+2 k·∑_j ∈ [n] x_j = B. By the above, and as S ∈_k, S is also a solution for I of profit exactly B. If there is a solution for I of profit at least B then there is a solution for U. Let F be a solution for I of profit at least B. Then, p(F) = c(F) ≤ B, since F satisfies the budget constraint. As p(F) ≥ B, we conclude that p(F) = c(F) = B. We now show that F is also a solution for U. First, assume towards contradiction that |F| ≠ k. If |F|< k then p(F) = ∑_i ∈ F x_i+|F| · 2 ·∑_j ∈ [n] x_j ≤∑_i ∈ F x_i+(k-1) · 2 ·∑_j ∈ [n] x_j ≤ 2 k·∑_j ∈ [n] x_j < B. We reach a contradiction to (<ref>). 
Since F is a solution for I it holds that F ∈_k; thus, |F| ≤ k. By the above, |F| = k. Therefore, ∑_i ∈ F x_i = c(F) - |F| · 2 ·∑_j ∈ [n] x_j = c(F) - 2 k ·∑_j ∈ [n] x_j = B - 2 k ·∑_j ∈ [n] x_j = T. By Claims <ref> and <ref>, there is a solution for U if and only if there is a solution for I of profit at least B. Furthermore, the construction of I can be done in polynomial time in the encoding size of U. Hence, an FPT algorithm which finds an optimal solution for I can decide the instance U in FPT time. As KSS is known to be W[1]-hard <cit.>, we conclude that BM is also W[1]-hard. In the proof of <Ref> we use a lower bound on the kernel size of Perfect ℓ-Dimensional Matching (ℓ-PDM), due to Dell and Marx <cit.>. The input for the problem consists of the finite sets U_1,… U_ℓ and E⊆ U_1×…× U_ℓ. Also, we have an ℓ-dimensional matching constraint (E,) to which we refer as the associated set system of the instance (i.e., contains all subsets S ⊆ E such that for any two distinct tuples (e_1,…, e_ℓ), (f_1,…, f_ℓ) ∈ S and every i ∈ [ℓ] it holds that e_i ≠ f_i). The instance is associated also with the parameter k=n/ℓ, where n=∑_j=1^ℓ |U_ℓ|. We refer to |E| as the number of tuples in the instance. The objective is to find S∈ such that |S| = k. Let J=(U_1, … , U_ℓ,E) denote an instance of ℓ-PDM We say J is a “yes” instance if such a set S exists; otherwise, J is a “no” instance. Observe that the parameter k is set such that if S∈ and |S| = k then every element in U_1∪…∪ U_ℓ appears in exactly one of the tuples in S. Let ℓ≥3 and >0. If coNP⊈NP / poly then ℓ-PDM does not have a kernel in which the number of tuples is O(k^ℓ-). Proof of Lemma <ref>: Assume coNP⊈NP / poly. Furthermore, assume towards a contradiction that there is a function f:ℕ→ℕ, constants c_1,c_2, where c_2-c_1<0, and an algorithm  that, given a BM instance I = (E, , c,p, B,k,ℓ) and 0<<1/2, finds in time |I|^O(1) a representative set of I and of size O ( f(ℓ) · k^ℓ-c_1·1/^c_2). We use to construct a kernel for 3-PDM. Consider the following kernelization algorithm for 3-PDM. Let J=(U_1,U_2,U_3,E) be the 3-PDM input instance. Define n=|U_1|+|U_2|+|U_3|, ℓ =3, and k=n/ℓ. Furthermore, let (E,) be the set system associated with the instance, and let be an ℓ-matchoid representing the set system (E,). Run  on the BM instance I=(E,,,̧p,B,k,ℓ) with =1/3k, where c(e)=p(e)=1 for all e∈ E and B=k. Let R⊆ E be the output of . Return the 3-PDM instance J'=(U_1,U_2,U_3,R). Since  runs in polynomial time, the above algorithm runs in polynomial time as well. Moreover, as k= n/3 and R⊆ E, it follows that the returned instance can be encoded using O(k^4) bits. Let (R,') be the set system associated with J'. Since R⊆ E, it follows that '⊆. Hence, if there is S∈' such that |S|=k, then S∈ as well. That is, if J' is a “yes” instance, so is J. For the other direction, assume that J is a “yes” instance. That is, there is S∈ such that |S|=k. Then S is a solution for the BM instance I (observe that c(S)=|S|=k=B). Therefore, as R is a representative set of I and =1/3k, there is a solution T for I such that T⊆ R, and p(T) ≥ (1-2)·(I)≥(1-2)· p(S) = (1-2/3k)· p(S)= (1-2/3k)· k = k-2/3. Since the profits are integral we have that |T|=p(T)≥ k. Furthermore |T|≤ k (since T is a solution for I), and thus |T|=k. Since T∈ (as T is a solution for I) and T⊆ R, it trivially holds that T∈'. That is, T∈' and |T|=k. Hence, J' is a “yes” instance. We have showed that the above procedure is indeed a kernelization for 3-PDM. Now, consider the size of R. 
Since returns a representative set of size O ( f(ℓ) · k^ℓ-c_1·1/^c_2) it follows that |R| = O( f(3) · k^3-c_1· (3k)^c_2) = O( k^3-c_1+c_2). As c_2-c_1<0, we have a contradiction to <Ref>. Thus, for any function f:ℕ→ℕ and constants c_1,c_2 satisfying c_2-c_1<0, there is no algorithm which finds for a given BM instance I = (E, , c,p, B,k,ℓ) and 0<<1/2 a representative set of I and of size O ( f(ℓ) · k^ℓ-c_1·1/^c_2) in time |I|^O(1). § A POLYNOMIAL TIME 1/2·ℓ-APPROXIMATION FOR BM In this section we prove <Ref>. The proof combines an existing approximation algorithm for the unbudgeted version of BM <cit.> with the Lagrangian relaxation technique of <cit.>. As the results in <cit.> are presented in the context of ℓ-extendible set systems, we first define these systems and use a simple argument to show that such systems are generalizations of matchoids. We refer the reader to <cit.> for further details about ℓ-extendible systems. Given a finite set E, ⊆ 2^E, and ℓ∈ℕ, we say that (E,) is an ℓ-extendible system if for every S ∈ and e ∈ E ∖ S there is T ⊆ S, where |T| ≤ℓ, such that (S ∖ T) ∪{ℓ}∈. The next lemma shows that an is in fact an ℓ-extendible set system. For any ℓ∈ℕ_>0 and an ℓ-Matchoid = { M_i = (E_i, _i) }_i ∈ [s] on a set E, it holds that (E,()) is an ℓ-extendible set system. Let S ∈() and e ∈ E ∖ S. As is an ℓ-matchoid, there is H ⊆ [s] of cardinality |H| ≤ℓ such that for all i ∈ [s] ∖ H it holds that e ∉ E_i and for all i ∈ H it holds that e ∈ E_i. Since for all i ∈ H it holds that (E_i,_i) is a matroid, either (S ∩ E_i) ∪{e}∈_i, or there is a_i ∈ S ∩ E_i such that ((S ∩ E_i) ∖{a_i}) ∪{e}∈_i (this follows by repeatedly adding elements from S ∩ E_i to {e} using the exchange property of the matroid (E_i,_i)). Let L = {i ∈ H | (S ∩ E_i) ∪{e}∉_i}. Then, there are |L| elements T = {a_i}_i ∈ L such that for all i ∈ L it holds that ((S ∩ E_i) ∖{a_i}) ∪{e}∈_i and for all i ∈ H ∖ L it holds that (S ∩ E_i) ∪{e}∈_i. Thus, it follows that (S ∖ T) ∪{e}∈() by the definition of a matchoid. Since |T| = |L| ≤ |H| ≤ℓ, we have the statement of the lemma. Proof of Lemma <ref>: Consider the BM problem with no budget constraint (equivalently, B>c(E)) that we call the maximum weight matchoid maximization (MWM) problem. By Lemma <ref>, MWM is a special case of the maximum weight ℓ-extendible system maximization problem, which admits 1/ℓ-approximation <cit.>.[The algorithm of <cit.> can be applied also in the more general setting of ℓ-systems. For more details on such set systems, see, e.g., <cit.>.] Therefore, using a technique of <cit.>, we have the following. There is an algorithm that, given some >0, returns a solution for the BM instance I of profit at least ( 1/ℓ/1/ℓ+1 -) ·(I), and whose running time is |I|^O(1)· O(log(^-1)). Now, we can set = 1/ℓ/1/ℓ+1-1/2ℓ; then, the above algorithm has a running time |I|^O(1), since ^-1 is polynomial in ℓ and ℓ≤ |I|. Moreover, the algorithm returns a solution S for I, such that (I) ≥ p(S) ≥( 1/ℓ/1/ℓ+1 -) ·(I) = 1/2ℓ·(I). To conclude, we define the algorithm which returns α = p(S). By the above discussion, (I) ≥α≥(I)/2ℓ, and the running time of is |I|^O(1). § DISCUSSION In this paper we present an FPT-approximation scheme (FPAS) for the budgeted ℓ-matchoid problem (BM). As special cases, this yields FPAS for the budgeted ℓ-dimensional matching problem (BDM) and the budgeted ℓ-matroid intersection problem (BMI). While the unbudgeted version of BM has been studied earlier from parameterized viewpoint, the budgeted version is studied here for the first time. 
We show that BM parameterized by the solution size is W[1]-hard already with a degenerate matroid constraint (Lemma <ref>); thus, an exact FPT time algorithm is unlikely to exist. Furthermore, the special case of unbudgeted ℓ-dimensional matching problem is APX-hard, already for ℓ=3, implying that PTAS for this problem is also unlikely to exist. These hardness results motivated the development of an FPT-approximation scheme for BM. Our FPAS relies on the notion of representative set - a small cardinality subset of the ground set of the original instance which preserves the optimum value up to a small factor. We note that representative sets are not lossy kernels <cit.> as BM is defined in an oracle model; thus, the definitions of kernels or lossy kernels do not apply to our problem. Nevertheless, for some variants of BM in which the input is given explicitly (for instance, this is possible for BDM) our construction of representative sets can be used to obtain an approximate kernelization scheme. Our results also include a lower bound on the minimum possible size of a representative set for BM which can be computed in polynomial time (<Ref>). The lower bound is based on the special case of the budgeted ℓ-dimensional matching problem (BDM). We note that there is a significant gap between the size of the representative sets found in this paper and the lower bound. This suggests the following questions for future work. * Is there a representative set for the special case of BDM whose size matches the lower bound given in <Ref>? * Can the generic structure of ℓ-matchoids be used to derive an improved lower bound on the size of a representative set for general BM instances? The budgeted ℓ-matchoid problem can be naturally generalized to the d-budgeted ℓ-matchoid problem (d-BM). In the d-budgeted version, both the costs and the budget are replaced by d-dimensional vectors, for some constant d≥ 2. A subset of elements is feasible if it is an independent set of the ℓ-matchoid, and the total cost of the elements in each dimension is bounded by the budget in this dimension. The problem is a generalization of the d-dimensional knapsack problem (d-KP), the special case of d-BM in which the feasible sets of the matchoid are all subsets of E. A PTAS for d-KP was first given in <cit.>, and the existence of an efficient polynomial time approximation scheme was ruled out in <cit.>. PTASs for the special cases of d-BM in which the matchoid is a single matroid, matroid intesection or a matching constraint were given in <cit.>. It is likely that the lower bound in <cit.> can be used also to rule out the existence of an FPAS for d-BM. However, the question whether d-BM admits a (1-)-approximation in time O( f(k+ℓ) · n^g()), for some functions f and g, remains open.
http://arxiv.org/abs/2307.05109v1
20230711083612
Conformalization of Sparse Generalized Linear Models
[ "Etash Kumar Guha", "Eugene Ndiaye", "Xiaoming Huo" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Conformalization of Sparse Generalized Linear Models
Etash Kumar Guha (College of Computing, Georgia Institute of Technology, Atlanta, GA, USA), Eugene Ndiaye (Apple; work partly done while at Georgia Tech), Xiaoming Huo (H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA)
Keywords: Conformal Prediction, Linear Models, Sparsity
Given a sequence of observable variables {(x_1, y_1), …, (x_n, y_n)}, the conformal prediction method estimates a confidence set for y_n+1 given x_n+1 that is valid for any finite sample size by merely assuming that the joint distribution of the data is permutation invariant. Although attractive, computing such a set is computationally infeasible in most regression problems. Indeed, in these cases, the unknown variable y_n+1 can take an infinite number of possible candidate values, and generating conformal sets requires retraining a predictive model for each candidate. In this paper, we focus on a sparse linear model with only a subset of variables for prediction and use numerical continuation techniques to approximate the solution path efficiently. The critical property we exploit is that the set of selected variables is invariant under a small perturbation of the input data. Therefore, it is sufficient to enumerate and refit the model only at the change points of the set of active features and smoothly interpolate the rest of the solution via a Predictor-Corrector mechanism. We show how our path-following algorithm accurately approximates conformal prediction sets and illustrate its performance using synthetic and real data examples. § INTRODUCTION Modern statistical learning algorithms perform remarkably well in predicting an object based on its observed characteristics. In terms of AI safety, it is essential to quantify the uncertainty of their predictions. More precisely, after observing a finite sequence of data _n = {(x_1, y_1), …, (x_n, y_n)}, it is interesting to analyze to what extent one can build a confidence set for the next observation y_n+1 given x_n+1. A classical approach is to adjust a prediction model μ__n on the observed data _n and consider an interval centered around the prediction of y_n+1 when the fitted model receives x_n+1 as new input, using μ__n(x_n+1). We calibrate the confidence interval to satisfy a 100(1-α)% confidence by considering, for any level α in (0, 1), the set {z : |z - μ__n(x_n+1)| ≤ Q_n(1-α)}, where Q_n(1 - α) is the (1-α)-quantile of the empirical cumulative distribution function of the fitted residuals |y_i - μ__n(x_i)| for indices i in {1, …, n}. If the fitted model is close to the exact value, this method is approximately valid as n goes to infinity. Alternatively, conformal prediction is a versatile and simple method introduced in <cit.> that provides a finite sample and distribution free 100(1 - α)% confidence region for the predicted object based on past observations. The main idea is to follow the construction of the confidence set in <Ref> by using candidate values for y_n+1. 
Since the true y_n+1 is not given in the observed dataset _n, one can instead learn a predictive model μ__n+1(z) on an augmented database _n+1(z) = _n ∪ (x_n+1, z) , where a candidate z replaces the unknown response y_n+1. We can, therefore, define a prediction loss for each observation and rank them. A candidate z will be considered conformal or typical if the rank of its loss is sufficiently small. The conformal prediction set will simply contain the most typical z as a confidence set for y_n+1. More formally, the conformal prediction set is obtained as {z : |z - μ__n+1(z)(x_n+1)| ≤ Q_n+1(1 - α, z)}, where Q_n+1(1 - α, z) is the (1-α)-quantile of the empirical cumulative distribution function of the refitted residuals, e.g., |y_i(z) - μ__n+1(z)(x_i)| for indices i in {1, …, n+1} and y(z)=(y_1, …, y_n, z). This method benefits from a strong coverage guarantee without any assumption on the distribution, including finite sample size n; see <Ref>. The conformal prediction approach has been applied for designing uncertainty sets in active learning <cit.>, anomaly detection <cit.>, few-shot learning <cit.>, time series <cit.>, or to infer the performance guarantee for statistical learning algorithms <cit.>. We refer to the extensive reviews in <cit.> for other applications to artificial intelligence. Despite its attractive properties, the computation of conformal prediction sets traditionally requires fitting a model μ__n+1(z) for each possible augmented dataset _n+1(z) corresponding to each possible candidate z for y_n+1. The number of possible candidates is infinite in a regression setting where an object can take an uncountable number of possible values. Therefore, the computation of conformal prediction is generally infeasible without additional structural assumptions on the underlying model fit. Otherwise, the calculation costs remain high or impossible. While many algorithms encounter this problem of fitting many models under alterations to the regularization parameter λ <cit.>, to our knowledge, such algorithms do not exist for general loss functions under changes to the dataset without high computation cost. We can avoid the central issue of refitting the model many times by using the structural assumptions given by the setting of General Linear Models with ℓ_1 regularization. =-1 Contributions We generalize linear homotopy approaches from quadratic loss to a broader class of nonlinear loss functions using numerical continuation to efficiently trace a piecewise smooth solution path. Overall, we propose a homotopy drawing algorithm that efficiently keeps track of the weights over the space of possible candidates using the sparsity induced by the ℓ_1 regularization. We develop an efficient Conformal Prediction algorithm for sparse generalized linear models from this homotopy algorithm. Additionally, using numerical continuation and the patterns in the sparsity of the weights, we relinquish the expensive necessity of retraining the model many times from random initialization. Furthermore, we provide a primal prediction step that significantly reduces the number of iterations needed to obtain an approximation at high precision. We illustrate the performance of our algorithm as a homotopy drawer and a conformal set generator using Quadratic, Asymmetric and Robust Loss functions with ℓ_1 regularization. =-1 Related Works Our methodology uses numerical continuation (also called homotopy) to generate a path of solutions. 
Such continuation techniques have been previously used when the objective function is differentiable <cit.>, <cit.> for support vector machine, <cit.> for logistic regression, and more general loss functions regularized with the ℓ_1 norm in <cit.>. However, the latter focus on the regularization path and plot the solution curve as the regularization parameter λ varies. To our knowledge, there does not exist work generating the solution curve as the label z varies in y(z) for general loss functions. In the setting we consider, we recall that it is the response vector that is parameterized as y(z) = (y_1, …, y_n, z) for a real value z; for which <cit.> and <cit.> proposed a homotopy algorithm when the loss function is quadratic. However, such algorithms do not work for general nonlinear loss functions; our algorithm extends these works to such nonlinear loss functions. For such loss functions, works such as <cit.> aim to approximate the homotopy only enough to generate the conformal prediction set. However, this work suffers much worse as increasing accuracy is required when drawing the homotopy and cannot, for example, recover the path with quadratic loss, for which an exact homotopy algorithm is known. NotationFor a nonzero integer n, we denote [n] to be the set {1, ⋯, n}. Furthermore, the row-wise feature matrix is X = [x_1, ⋯, x_n+1]^⊤ such that X ∈ℝ^(n+1) × p. We use the notation X_A to refer to the sub-matrix of X assembled from the columns with indices in A. If we need to do so for only one index j, where j ∈ [p], we use X_j. For brevity, we will define σ_max(X_A) as the maximum singular value of X_A, i.e. σ_max(X_A) = X_A_2. We also similarly define σ_min(X_A). If a function β(z) returns a vector for some input z, we can index that output vector by β_A(z), where A ⊂ [p] or β_j(z) where j ∈ [p]. Moreover, given a function f(x_i, x_j) of two variables, we denote the gradient of that function as ∂ f. Furthermore, we use the simple notation ∂_i,j,kf = ∂^3 f/∂ x_i ∂ x_j∂ x_k where i,j,k ∈ [2]. We denote the smallest integer no less than a real value r as ⌈ r ⌉. We denote by Q_n+1(1 - α), the (1 - α)-quantile of a real valued sequence (U_i)_i ∈ [n + 1], defined as the variable Q_n+1(1 - α) = U_(⌈ (n+1)(1-α) ⌉), where U_(i) are the i-th order statistics. For k in [n+1], the rank of U_k among U_1, ⋯, U_n+1 is defined as (U_k) = ∑_i=1^n+11_U_i ≤ U_k. § SPARSE GENERALIZED LINEAR MODELS By definition of the conformal prediction set in <Ref>, one needs to consider an augmented dataset _n+1(z) for any possible replacement of the target variable y_n+1 by a real value z. This implies the computation of the whole path z ↦μ__n+1(z)(x_n+1) as well as the path of scores and quantiles. However, it is generally difficult to achieve. We focus on the Generalized Linear Model (GLM) regularized with an ℓ_1 norm that promotes sparsity of the model parameter. For a fixed z ∈, the weight β^⋆(z) is defined as a solution to the following optimization problem β^⋆(z) ∈_β∈ℝ^p f(y(z),Xβ) + λβ_1 . where the data fitting term f(y(z), y^⋆(z)) is a non negative loss function between a prediction y^⋆(z) and the augmented vector of labels y(z) = (y_1, ⋯, y_n, z). We parameterize a linear prediction as y_i^⋆ = x_i^⊤β^⋆(z) and the empirical loss is f(y(z), y^⋆(z)) = ∑_i=1^nℓ(y_i, y_i^⋆(z)) + ℓ(z, y^⋆_n+1(z)) . There are many examples of cost functions in the literature. A popular example is the power norm regression, where ℓ(a, b) = |a - b|^q. When q=2, this corresponds to the classical linear regression. 
The cases where q ∈ [1, 2) are frequent in robust statistics, where the case q = 1 is known as the least absolute deviation. One can also consider the loss function of <cit.>, which provides an asymmetric loss ℓ(a, b) = exp(γ(a - b)) - γ(a - b) - 1, for γ≠ 0. §.§ Assumptions and Properties We first describe the structure of the optimal solution β^⋆(z) for a candidate z. A solution to the optimization problem from <Ref> must obey the first-order optimality condition. Analyzing the solution reveals a set of weights in β^⋆(z) whose value is 0 and which, thus, do not contribute to the inference. This is a crucial property of ℓ_1 regularization. Lemma (unique solution). A vector β^⋆(z) ∈ℝ^p is optimal for <Ref> if and only if for y^⋆(z) = Xβ^⋆(z), it holds -X^⊤∂_2 f(y(z),y^⋆(z)) = λ v(z) , where v(z) belongs to the subdifferential of the ℓ_1 norm at β^⋆(z): for all j ∈{1, …, p}, we have v_j(z) ∈{sign(β_j^⋆(z))} if β_j^⋆(z) ≠ 0 , [-1, 1] if β_j^⋆(z) = 0 . Within this lemma, we wish to formally distinguish between nonzero weights and zero weights, as this helps determine the value of v_j(z), per <Ref>. Definition (active set). We define our active set at a point z as A(z) = {j ∈ [p]: |X_j^⊤∂_2 f(y(z), y^⋆(z))| = λ}. The active set contains at least all the indices of the optimal solution that are guaranteed to be nonzero. We will denote A=A(z) if there is no ambiguity. The following result provides sufficient conditions to ensure uniqueness of the solution path: for any z, there exists a single optimal solution β^⋆(z) for Problem <ref>. Lemma (unique path). For all z, we assume that the matrix X_A(z) is full rank and that the loss function f is strictly convex. With these two assumptions, for all candidates z, only one unique optimal solution β^⋆(z) exists. Thus, the solution path z ↦β^⋆(z) is well defined. In the following, for simplicity of the presentation of the algorithms, we will add the classical qualification condition that the active set coincides with the support of the solution for any candidate z where the path is differentiable. § EFFICIENT COMPUTATION OF THE SOLUTION PATH We aim to finely approximate the function β^⋆(z) as β̂(z) across all candidates z. The initial and main observation is that the active set map (resp. solution path) is piecewise constant (resp. smooth). That is to say, the set of variables selected by the ℓ_1 penalty is invariant with respect to small perturbations of the input data. Building on this, the path drawing algorithm is a combination of finding points where the active set changes occur and estimating the optimal solution, leveraging the regularity of the loss f. We have two situations for a change in the active set: * A nonzero variable becomes zero ∃ j ∈ A(z) s.t. β_j^⋆(z) ≠ 0 and β_j^⋆(z_j^out) = 0 . * A zero variable becomes nonzero ∃ j ∈ A^c(z) s.t. |X_j^⊤∂_2 f(y(z_j^in), y^⋆ (z_j^in))| = λ. Here, z_j^out and z_j^in are the estimated points where variable j could leave or join the active set, respectively. With decreasing input z, the next change point occurs at z_next(z) = max(max_j ∈ A(z) z_j^out, max_j ∈ A^c(z) z_j^in) . Here, z_next(z) is the function that finds where the active set changes after point z. The change points are called kinks of the path because they correspond to the non-differentiable points of the solution path z ↦β^⋆(z). Core difficulties are that f can be highly nonlinear, and that the optimal weights β^*(z^+) at an arbitrary point z^+ cannot be efficiently computed for many loss functions. 
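Before describing how the path is traced, the following sketch (ours; the function names and the NumPy-based form are illustrative assumptions, not the paper's code) records two of the losses mentioned above together with their partial derivatives in the prediction argument. These are the ingredients from which ∂_2 f, ∂_{2,2} f and ∂_{2,1} f are assembled in the next section, and both examples have a strictly positive second derivative, consistent with the strict-convexity assumption used for uniqueness of the path.

```python
import numpy as np

# Illustrative sketch (ours): elementwise losses l(a, b) with derivatives in b.
def quadratic_loss(a, b):
    val = (a - b) ** 2
    d_b = -2.0 * (a - b)                 # dl/db
    d_bb = 2.0 * np.ones_like(b)         # d2l/db2  (> 0: strictly convex in b)
    d_ba = -2.0 * np.ones_like(b)        # d2l/(db da)
    return val, d_b, d_bb, d_ba

def asymmetric_exp_loss(a, b, gamma=1.0):
    # l(a, b) = exp(gamma*(a - b)) - gamma*(a - b) - 1, with gamma != 0
    r = gamma * (a - b)
    val = np.exp(r) - r - 1.0
    d_b = gamma * (1.0 - np.exp(r))      # dl/db
    d_bb = gamma ** 2 * np.exp(r)        # d2l/db2  (> 0)
    d_ba = -gamma ** 2 * np.exp(r)       # d2l/(db da)
    return val, d_b, d_bb, d_ba
```

With these derivatives in hand, we return to the difficulty raised above.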
To alleviate this, our algorithm sequentially creates a linearized version of β_A^⋆(z^+) called β̃_A(z^+) (<Ref>) in order to estimate the active set changes (<Ref> and <Ref>). Given a point of active set change z_t, we can manually correct β̃_A(z_t) into β̂_A(z_t) so that β̂_A(z_t) ≈β_A^⋆(z_t) up to a negligible optimization error ϵ_tol using any appropriate solver (<Ref>). It then approximates β_A^+^⋆(z^+), where A^+ is the new active set, repeating these steps until the stopping point is reached. We detail the entire pipeline in <Ref> and illustrate how our approximated solution path deviates from the exact one for different loss functions in <Ref>. =-1 §.§ Solution Estimation We wish to approximate β_A^⋆(z^+) for a candidate z^+ smaller than the most recently found kink z_t where A(z^+) = A(z_t). To start, we will assume access to the corrected (up to negligible error) weights β̂_A(z_t) at the previous kink z_t. We can use a local linearization of the solution path as β̃_A(z^+) = β̂_A(z_t) + β̂_A^' (z_t) × (z^+ - z_t) , where, β̂_A^'(z_t) is our approximation of the true slope ∂β_A^⋆/∂ z(z_t), which we do not have access to. To understand this term, we follow <cit.> to define H(y(z), β_A^⋆(z)) = X_A^⊤∂_2 f(y(z),y^⋆(z)) + λ v_A , From the Optimality Condition in <Ref>, it holds H(y(z), β_A^⋆(z)) = 0 ⟹∂ H/∂ z = 0 . By the implicit function theorem and the chain rule, we have ∂β_A^⋆/∂ z = -(∂ H/∂β)^-1∂ H/∂ y∂ y/∂ z ∂ H/∂β = X_A^⊤∂_2, 2 f(y(z), y^⋆(z))X_A ∂ H/∂ y = X_A^⊤∂_2, 1 f(y(z), y^⋆(z)) ∂ y/∂ z = (0, …, 0, 1)^⊤. To compute an approximation of ∂β_A^⋆/∂ z(z_t), we use a plug-in approach and only replace the (unknown) exact value of y^⋆(z_t) = Xβ^⋆(z_t) with the approximate ŷ(z_t) = Xβ̂(z_t), yielding β̂_A^'(z_t). Notably, we get an equation for β̃_A(z^+), which is efficient to compute given y(z^+). As a reminder, the loss function f differentiates this algorithm from existing path-finding algorithms tailored for changes in the hyperparameter λ. If f is the Quadratic loss function, we recover the path-finding algorithm from <cit.>. A completely different homotopy will be generated if it is another loss function. =-1 §.§ Active Set Updates We have to track the changes that may occur in the active sets along the path sequentially depending on whether the variable leaves or enters the active set. We will compute our path restricted in the interval [z_min, z_max] where z_min = min(y_1, …, y_n) and z_max = max(y_1, …, y_n). For sufficiently large sample size n, any point z outside this interval has a very low probability of being in the conformal set since it is an outlier of a label; see justification in <Ref>. For simplicity, we reiterate that we know the corrected β̂(z_t) at the most recent kink z_t approximating β^*(z_t) up to error ϵ_tol and the active set of weights A(z_t). We estimate the kinks by following <Ref> and replacing the exact solution β^⋆(z_t) by β̃(z_t) in <Ref>. As such, we will iteratively set z_t+1 = z_next(z_t) as the next change point following <Ref>. Leaving the active set At the point, where a nonzero variable becomes zero, we know that by <Ref>, we have a closed form approximation of β_A^⋆(z^+) given β_A(z_t). Therefore, for a feature index j ∈ A, we have a closed-form approximation for β_j^⋆(z^+) in terms of z^+, which we can compute efficiently. Thus, from <Ref>, j leaving the active set occurs at β_j^⋆(z^+) = 0 implies a kink occurs at z^+ when 0 ≈β̃_j(z^+) defined in the R.H.S. of <Ref>; which is easily solvable in closed-form. 
Thus, for an active variable j with nonvanishing gradient β̂_j^'(ẑ_t) ≠ 0, we define z_j, t+1^out = z_t - β̂_j(z_t)/β̂_j^'( z_t), and define z_j, t+1^out = -∞ otherwise. We remind the reader that β̂_j^'(z_t) is our approximation of the true slope ∂β_j^⋆/∂ z(z_t) from <Ref>. Joining the active set At the point where a variable becomes nonzero, we know from <Ref> that for any inactive variable j ∈ A^c that joins the active set | X_j^⊤∂_2 f(y(z^+), X_A^+β_A^+^⋆(z^+))| = λ where A^+ = A ∪{j}. However, given that we are searching for a point z^+ where the active sets shift from A to A^+, at point z^+, β_j^⋆(z^+) is roughly 0 since it is the first point where β_j^⋆(z^+) becomes nonzero. Therefore, given this information, the prediction X_jβ_j^⋆(z^+) = 0 where z^+ is a kink. Using this idea, we can provide the equivalence X_A^+β_A^+^⋆(z^+) = X_Aβ_A^⋆(z^+) = y^⋆(z^+) . This equivalence is useful as we know how to approximate β_A^⋆(z^+), and therefore y^⋆(z^+), efficiently from <Ref>. Therefore, the j-th variable must join the active set at approximately z^+ such that ℐ_j(z^+) = 0 where ℐ_j(z^+) = | X_j^⊤∂_2 f(y(z^+), y^⋆(z^+))| - λ. We also leverage a plug-in estimate of <Ref> by replacing y^⋆(·) by ŷ(·). We could use a root-finding function to efficiently find the roots of the function ℐ_j(z^+) where the kink may lie. However, we seek a closed form as in <Ref> to make finding the roots of ℐ_j(z^+) more efficient. We do this via linearization again. §.§ Approximation of ∂_2 f(y(z^+), y^⋆(z^+)) While β̃_j(z^+) is linear in z^+, giving way to an explicit solution for z^+, this property does not hold for ℐ_j(z^+) in <Ref>. To achieve such a form, we need to linearize further ∂_2 f(y(z^+), y^⋆(z^+)). To simplify, we denote f(y(z), y^⋆(z)) = f∘ζ(z) where ζ(z) = (y(z), y^⋆(z)) , and approximate its gradient ∂_2 f∘ζ(z^+) as ∂_2 f∘ζ(z) + ∂_2,1 f∘ζ(z)^⊤Δ y + ∂_2,2 f∘ζ(z)^⊤Δ y^⋆ where Δ y = y(z^+) - y(z) and Δ y^⋆ = y^⋆(z^+) - y^⋆(z). We still have that <Ref> can be nonlinear since Δ y^⋆ can be nonlinear in z^+. To alleviate this, we leverage the local approximation of the solution path in <Ref> and the plug-in replacement of ∂β_A^⋆/∂ z with β̂_A^'. As such, we can estimate the root of ℐ_j(z^+) and sequentially define the next point where the jth variable becomes active. To simplify the expression, we set ζ̂(z) = (y(z), ŷ(z)) and g(z_t) = [∂_2 1 f∘ζ̂(z_t)]_n+1 + ∂_2,2f∘ζ̂(z_t)^⊤ X_A β̂_A^'(z_t) . A zero variable j is estimated to become nonzero at z_j, t+1^in = z_t + -X_j^⊤∂_2 f ∘ζ (z_t) ±λ/ X_j^⊤ g(z_t) , The detailed computations are provided in <Ref>. Note that when the denominator g(z_t) is zero, we set z_j, t+1^in = -∞. Finally, the next kink is estimated as z_t+1 = max( max_j ∈ A(z_t) z_j, t+1^out, max_j ∈ A^c(z_t) z_j, t+1^in) . §.§ Solution Updates Our active set change point finder obtains the next kink z_t+1 by tracking all variables in the optimal solution to see whether or not it cancels out after z_t. However, our kink-finding tool requires exact knowledge of β̂(z_t), as in <Ref>. To find the next kink, we, therefore, need to know β̂(z_t+1). To ensure that our linearized version β̃_A(z_t+1) is close enough to the exact solution β_A^⋆(z_t+1), we manually correct our linearized weights β̃_A(z_t+1), creating our β̂_A(z_t+1). We use the Predictor-Corrector strategy described below <cit.>. =-1 Predictor To initialize the solving process for β̂(z_t+1), we first provide our linearized version β̃(z_t+1) from <Ref> as a warm start initialization. 
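To make the corrector concrete, here is a minimal proximal-gradient (ISTA-style) corrector of the kind mentioned in the text; any more advanced solver can be substituted. The fixed step size, the stopping rule, and the callable interface for the loss gradient are our assumptions rather than the paper's choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def corrector(X, y_z, beta_warm, grad_f, lam, step, n_iter=500, tol=1e-8):
    """Refine the predictor output beta_warm towards a minimizer of
    f(y(z), X beta) + lam * ||beta||_1 by proximal gradient descent.
    grad_f(y, pred) must return the per-sample derivative of the loss in the
    prediction (the vector d_2 f); `step` is a fixed step size."""
    beta = beta_warm.copy()
    for _ in range(n_iter):
        grad = X.T @ grad_f(y_z, X @ beta)                   # gradient of the smooth part
        beta_new = soft_threshold(beta - step * grad, step * lam)
        if np.max(np.abs(beta_new - beta)) < tol:            # crude epsilon_tol criterion
            return beta_new
        beta = beta_new
    return beta
```

In the algorithm, the linearized weights β̃(z_t+1) play the role of beta_warm, so the corrector typically needs only a few iterations to reach the tolerance ϵ_tol.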
This vastly improves the computation time of our corrector step here after. Corrector The solution obtained in the warm start often has a reasonably small approximation error. For example, in the case of the Quadratic loss, this warm start is exact and correction is unnecessary. However, it generally is an imprecise estimate of the exact solution. To overcome this, we use an additional corrector step using an iterative solver, such as proximal gradient descent initialized with the predictor output, or more advanced solvers such as <cit.> or <cit.>. This takes our linearized weight estimates of β̃(z_t+1) and outputs our approximate weights β̂(z_t+1) ≈β^*(z_t+1) up to error ϵ_tol which is a hyperparameter for our corrector. Finally, we can summarize our approximation of the homotopy as the following. β̂(z)= β̃(z) if z ∉{z_1, …, z_t} β̃^⋆(z) if z ∈{z_1, …, z_t} (output of corrector) For point z that is not a kink, we form our estiamte weights simply through the linearization. Otherwise, we can use the output of the corrector as our estimates. § CONFORMAL PREDICTION FOR SPARSE GLM Given a homotopy for specific data and loss function, computing the Conformal Prediction set relies on a simple calculation using the homotopy. Meanwhile, the primary tool for proving its validity is that the rank of one variable among an exchangeable and identically distributed sequence follows a (sub)-uniform distribution <cit.>.=-1 This idea of rank helps construct distribution-free confidence intervals. We can estimate the conformity of a given candidate z by calculating its prediction loss | z - y^⋆_n+1(z)| and compute its rank relative to the losses of the other datapoints. The candidate will be considered conformal if the rank of its loss is sufficiently small. Let us define the conformity measure for _n+1(z) as E_i(z) = |y_i - y_i^⋆(z)|, ∀ i ∈ [n] , E_n+1(z) = |z - y_n+1^⋆(z)| . The main idea for constructing a conformal confidence set is to consider the conformity of a candidate point z measured as=-1 π(z) = 1 - 1/n+1(E_n+1(z)) . The conformal prediction set will collect the most conformal z as a confidence set for y_n+1, gathers all the real values z such that π(z) ≥α. This condition occurs if and only if the score E_n+1(z) is ranked no higher than ⌈(n+1)(1 - α)⌉, among the sequence {E_i(z)}_i ∈ [n + 1], {z ∈: E_n+1(z) ≤ Q_n+1(1 - α, z)}, which is exactly the conformal set defined in <Ref>. We need to calculate the piecewise constant function z ↦π(z) to compute a conformal set. Fortunately, our framework directly sheds light on the computation of this value over the range space. Access to the homotopy, as well as the kinks, yields an efficient methodology for calculating the conformal prediction set over the range space. Once can readily use a root-finding approach <cit.> but it requires the assumption that the conformal set is an interval. Instead, we do so by tracking where changes in this set occur. Naturally, changes in the rank function only occur when the error of one example surpasses or goes below that of the error of the last example. Formally, this can be seen when | y_i(z) - y_i^⋆(z)| = | y_n+1(z) - y_n+1^⋆(z))|. We will look between the two kinks to efficiently find points satisfying <Ref>. For a point z between two kinks, we can efficiently estimate y^⋆(z). 
Indeed, given a point z is between two kinks z_t and z_t+1 with an active set A, we can use <Ref> to estimate the quantity y(z) - y^⋆(z) as ℱ(z) = y(z) - ŷ (z_t) + Xβ̂_A^'(z_t) × (z - z_t) , where β̂_A(z_t) is stored from the corrector step at the kink z_t. Given that this value is linear in z, we can form a closed-form explicit approximation for what z solves <Ref>. Therefore, we can look for where the π(z) value changes between every sequential pair of kinks. To find the conformal set, we track the changes π(z) and recompute it along each root of <Ref>, yielding an efficient methodology to compute π(z), and, therefore, the conformal set along the space of possible y_n+1 values.=-1 § THEORETICAL ANALYSIS To understand where and how our algorithm fails, we provide an upper bound on the pointwise error of our algorithm. The error is mainly accumulated in the linearizations we use for estimating the solution and gradient of the loss. To form such bounds, we need assumptions on the regularity of the loss function f itself and on the sequence of design matrix restricted on the active sets along the path. Namely, we will see that the derivatives of the loss function is bounded.=-1 []lemboundderivatives The second derivatives, assumed to be continuous, of the loss function f are locally bounded by data-dependent constants. Indeed, for any z ∈ [z_min, z_max], we have β^⋆(z) ∈ℬ_·_1(0, R/λ) where R = z ∈ [z_min, z_max]max f(y(z), 0) . By Weierstrass theorem, for any i,j ∈ [2], we have ∂_i,j f ∘ζ(z)_2≤ν_f . []lemstrongconvexityalongpath We assume that the loss f is μ_f-strongly convex μ_f inf_ζ≤ B∂_2, 2 f ∘ζ(z) > 0, where B is provided in the appendix. Thus, for any z ∈ [z_min, z_max], the maximum singular value of the inverse of the matrix ∂ H/∂β = X_A^⊤∂_2, 2 f∘ζ(z) X_A is upper bounded as ∂ H/∂β^-1_2≤1/σ_min^2(X_A) ×μ_f. With these two lemmas, we can form our error bounds. []thmwholeerror The error between our linearized weights β̃(z^+) and the true weights β^⋆(z^+) is upper bounded by β̃(z^+) - β^⋆(z^+)_2 ≤ϵ_tol + L ν_f/μ_f× |z^+ - z_t| . where L = σ_max(X_A(z_t))/σ_min^2(X_A(z_t)) + sup_z ∈ [z^+, z_t]σ_max(X_A(z))/σ_min^2(X_A(z)), and z_t is the prior kink of z^+. []thmuppboundg The estimation error is upper bounded by ∂_2 f ∘ζ(z^+) - ∂_2 f∘ζ̂(z^+)_2 ≤ K [ ϵ_tol + Lν_f/μ_f|z^+ - z_t|] where K=ν_f ×σ_max(X_A). § NUMERICAL EXPERIMENTS Our central claim is twofold. Our method efficiently and accurately generates the homotopy over general loss functions. Our method also efficiently and accurately generates conformal sets over general loss functions. We demonstrate these two claims over different datasets and loss functions. For reproducibility, our implementation is at <github.com/EtashGuha/sparse_conformal>. Datasets We use four datasets to illustrate the performance of our algorithm. The first three are real datasets sourced from <cit.>. The Diabetes dataset is a regression dataset with 20 features and 442 samples. Additionally, we use the well-known regression dataset from <cit.> denoted as Friedman1, which has 10 features and 100 samples. We also use the multivariate dataset denoted Friedman2 from <cit.>, which has 100 samples and 4 features. These datasets are used to demonstrate the capabilities of our algorithm on real datasets. We also generate regression problems synthetically. We sample the data and labels from a uniform distribution between [-1,1]. We also divide by the standard deviation to normalize the dataset. 
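As a small illustration of the synthetic setup described above, the snippet below draws features and labels uniformly on [-1, 1] and standardises them; the random seed and the exact normalisation are our own choices.

```python
import numpy as np

def make_synthetic_regression(n_samples, n_features, seed=0):
    """Toy regression data in the spirit of the description above: entries and
    labels drawn uniformly on [-1, 1], then divided by their standard deviation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, n_features))
    y = rng.uniform(-1.0, 1.0, size=n_samples)
    return X / X.std(axis=0), y / y.std()

X_syn, y_syn = make_synthetic_regression(100, 100)   # sizes here are placeholders
```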
We generate two different synthetic datasets, one normal-sized dataset, denoted with 100 samples and 100 features, and a larger dataset, denoted with 1000 features and 20 samples. This larger dataset is intended to display our algorithm's complexity in terms of the number of features. These datasets represent a reasonable range of regression problems usable for our experiments. =-1 Baselines To form a baseline for our algorithm, we use several baselines. This baseline is the most naive conformal prediction algorithm. For Grid algorithms, the algorithm selects 100 potential candidates evenly across the range of possible candidates. It uses the primal corrector at each point to calculate the weights to form the homotopy. A more sophisticated conformal prediction and homotopy generating algorithm is the Approximate homotopy from <cit.>, which leverages loss function smoothness to track violations (up to a prescribed error tolerance) of the optimality condition along the path.=-1 §.§ Homotopy Experiments To test our algorithm in terms of homotopy generation, we measure our algorithm's accuracy and efficacy against different baselines across different loss functions. For all baselines and our algorithm, we use Proximal Gradient Descent for Lasso Loss and for Robust and Asymmetric as Primal Correctors. Precisely, we measure the negative logarithm of the gap between primal values of the calculated β̂ values and a ground truth baseline. We measure this gap across many possible z values and take the average. The ground truth baseline is a Grid-based homotopy, where we compute the homotopy iteratively along a find grid of candidates. Given that we apply the negative logarithm to the primal gap, the larger the value reported, the smaller the true error term and the better the algorithm's performance. Moreover, we report the time taken in seconds required to form the homotopy. Our experiments cover the Lasso, Robust, and Asymmetric functions across all the datasets. =-1 We report our results in <Ref> and <Ref>. We shorten Synthetic to and Approximate to for brevity. As evident, we see a significant decrease in time used over Approximate Homotopy for most applications of the Lasso Loss with a significant increase in accuracy. On the largest dataset for Lasso Loss, our algorithm gets similar accuracy and is much more efficient. Furthermore, we report similar primal gaps for both ours and the approximate homotopy algorithms on Robust and Asymmetric losses. However, we achieve significant time improvements. Notably, on the Diabetes and Large dataset for Asymmetric loss and the Synthetic and Large dataset for both Asymmetric and Robust losses, we report an almost 50% reduction in the time taken to achieve a similar error. Overall, across all loss types and datasets, we either achieve similar or better errors with the same or less time relative to the standard Approximate Homotopy, demonstrating the capability of our algorithm to efficiently and accurately generate the homotopy. To illustrate the accuracy of our algorithm, we plot the optimization error gap over the space of all z ∈ [z_min, z_max] for all three loss functions and four datasets. We report the figures in <Ref>. Notably, we see that on <Ref>, we achieve all losses better than 10^-4. On other figures, all objective errors are bounded by 10^-2. Our application of Lasso and Robust over all datasets achieves near 0 objective error over the entire pass. 
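Before turning to the conformal-set experiments, it may help to spell out the typicalness computation that all methods share. The sketch below is a plain grid-based reference in the spirit of the Grid baseline: for each candidate z it refits on the augmented data, forms the scores E_i(z), and keeps the candidates whose score rank satisfies the condition of <Ref>. The fit_predict interface and the grid are placeholders, not the paper's API.

```python
import numpy as np

def typicalness(E):
    """pi(z) = 1 - Rank(E_{n+1}(z)) / (n + 1), for scores E = (E_1, ..., E_{n+1})."""
    return 1.0 - np.sum(E <= E[-1]) / len(E)

def grid_conformal_set(fit_predict, X, y, x_new, z_grid, alpha=0.1):
    """Reference (grid) conformal set: keep every candidate z whose score
    E_{n+1}(z) is ranked no higher than ceil((n + 1) * (1 - alpha))."""
    kept = []
    X_aug = np.vstack([X, x_new])
    for z in z_grid:
        y_aug = np.append(y, z)
        preds = fit_predict(X_aug, y_aug)          # the n + 1 fitted values y_i^*(z)
        E = np.abs(y_aug - preds)                  # conformity scores E_i(z)
        if np.sum(E <= E[-1]) <= np.ceil(len(E) * (1.0 - alpha)):
            kept.append(float(z))
    return kept
```

Our path-based algorithm computes the same object without any grid, by tracking between consecutive kinks the points where the rank underlying π(z) can change.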
§.§ Conformal Prediction Experiments It is a natural question whether this improvement in the generation of the homotopy function yields a strong conformal set generation algorithm. We demonstrate this both visually and empirically. We draw the π(z) function for visual verification over all four datasets and three loss functions using our algorithm. To form a baseline, we use the Grid algorithm. This algorithm is a ground truth to which we compare our π(z) function. For empirical verification, we compare coverage, length, and the time of our method vs. several important baselines. Namely, we use the Grid method, Approximate homotopy from <cit.>, the Oracle methodology, which has access to the true value of y_n+1 to form its conformal interval, and the Split methodology, which uses a calibration dataset to calibrate the conformal values predicted but loses statistical validity. Visual Results We report the figures in <Ref>. As is evident over all loss functions and datasets, our estimated π(z) roughly traces the true π(z) generated by the discretized searching algorithm. While on particular examples, notably Figures <ref>, <ref>, and <ref>, the trace is less accurate than the others. However, the error is within a reasonable range to achieve the desired coverage and length guarantees. We also report similar experiments for the Lasso loss, but we mention these in <Ref> since our method is exact for the Lasso loss. We demonstrate that our homotopy drawing algorithm yields an efficient and accurate methodology for generating conformal sets for general loss functions as tested on several datasets. =-1 Empirical Results We report our empirical results in <Ref>, <Ref>, and <Ref>. We can see that most methods maintain strong coverage guarantees over all datasets. For our experiments, we used α = 0.1, and most of our results hover around that level of coverage. Moreover, in <Ref>, we see that except for Oracle, across several loss functions and datasets, our algorithm achieves the smallest length. The Oracle, however, consistently has the best length due to its knowledge of the true y_n+1. Also, our algorithm is the fastest over all homotopy methods but slower than Split and Oracle, as seen in <Ref>. Therefore, our experiments indicate that our Conformal Prediction Algorithm is competitive in all coverage, length, and time measures. =-1 § CONCLUSION Our results demonstrate that we can efficiently and accurately draw the homotopy of the typicalness function of a model over several loss functions via exploiting the sparsity structure of the Linear Models with ℓ_1 regularization. Furthermore, we achieve explicit closed-form equations to model the behavior of this homotopy. Previous results mainly focus on quadratic loss functions or ignore the structure of the regularization altogether. Our framework, instead, captures this information and uses it to improve the accuracy of our final results. Several avenues for extending our research remain interesting. Spline instead of linear interpolation may yield improved accuracy for different loss functions. Additionally, smoothing at the kinks may reduce the algorithm's sensitivity to the primal corrector's results. Furthermore, we would like to expand our work to non-convex settings such as deep learning in future works. icml2023 § ADDITIONAL VISUALIZATIONS We have provided two extra visualizations for the reader's understanding. 
We have provided figures of the homotopy for different loss functions and what the conformality function π(z) looks like for the Quadratic Loss function on both our real and synthetic datasets. Homotopy Visualizations We run our homotopy generation algorithm over Quadratic Loss, Robust, and Asymmetric Loss functions over several low-dimensional synthetic examples. As we can see in <Ref>, for the Quadratic Loss, our homotopy algorithm perfectly matches that of the Grid baseline since our algorithm captures the Quadratic Loss homotopy from <cit.> exactly. Moreover, in <Ref> and <Ref>, we see that our algorithm very closely identifies the homotopy of the Grid algorithm. In the Robust and Asymmetric cases, the linearization causes a slight miss in the kink, but the difference is negligible. Across all dimensions, our homotopy generation algorithm closely tracks that of the baseline Grid algorithm. These visualizations verify visually that our homotopy generation algorithm is accurate. Conformity Function for Quadratic Loss We also visualize what the conformality function π(z) looks like across several datasets for the Quadratic Loss function. We do not include these in the main manuscript since our algorithm is exact on the Quadratic Loss function, and no visual verification is truly needed. Nevertheless, we provide such visuals in <Ref>. Our Conformal Prediction algorithm indeed matches precisely that of the Grid baseline algorithm. This confirms our claims that our algorithm is indeed exact on the Quadratic Loss function. § PROOFS FOR PROPERTIES OF GLM'S §.§ Proof of <Ref> * The Fermat rule reads 0 ∈{X^⊤∂_2 f(y(z),y^⋆(z))} + λ∂·_1(β^⋆(z)) . Defining v(z) ∈∂·_1(β^⋆(z)) yields <Ref>. To show <Ref>, we look at v_j(z) for any index j. We remind that by separability of the ℓ_1 norm, we have v_j(z) = ∂ |·|(β_j^⋆(z)). Hence, v_j(z) = sign(β_j^⋆(z)) if β_j^⋆(z) ≠ 0 and v_j(z) ∈ [-1, 1] otherwise. This proves the claim. §.§ Proof of <Ref> * We first prove that A(z) is unique. From the definition of the active set, we have A(z) = {j ∈ [p]: |X_j^⊤∂_2 f(y(z),Xβ^⋆(z))| = λ}, where we remind that β^⋆(z) ∈_β∈^|p| f(y(z),X β) + λβ_1 . Since, from strict convexity of f, the prediction Xβ^⋆(z) is unique for any solution β^⋆(z) to the aforementioned optimization problem, we have A(z) is uniquely defined. From the first order optimality condition, it exists v(z) ∈∂·_1(β^⋆(z)) 0 ∈ X^⊤∂_2 f(y(z), Xβ^⋆(z)) + λ v(z) . Restricted to the active set yields 0 ∈ X_A^⊤∂_2 f(y(z), X_Aβ_A^⋆(z)) + λ v_A(z) ⟺β_A^⋆(z) ∈_w ∈^|A| f(y(z),X_A w) + λw_1 . Since f is strictly convex and X_A is full rank, the latter optimization problem is strictly convex meaning β_A^⋆(z) is unique. §.§ Proof of <Ref> []lemcompact Let 0 be the vector of 0's, For all z ∈ [y_min, y_max], we have that the optimal weights β^⋆(z) satisfy {β^⋆(z) : z ∈ [z_min, z_max] }⊂{β : β_1 ≤R/λ} where R = sup_z ∈ [z_min, z_max] f(y(z), 0). Let's denote the objective function as P(β, z) = f(y(z), Xβ) + λβ_1 . We remind the reader that the solution β^⋆(z) satisfies β^⋆(z) = _β P(β, z). By optimality and assuming that f is non-negative, we have for any z λβ^⋆(z)_1 ≤ P(β^⋆(z), z) ≤ P(0, z) = f(y(z), 0) . Here, 0 is the vector of 0's, the first step comes from the definition of P, and the second inequality comes from the fact that β^⋆(z) is a minimizer of P. Naturally, we then have that the ℓ_1 norm of the the solving weights β^⋆(z) is bounded by the value of f(y(z), 0). Any solution β^⋆(z) is inside the ℓ_1 ball centered at 0 with radius R/λ. 
Since the path is truncated z ∈ [z_min, z_max], then the solution path is bounded {β^⋆(z) : z ∈ [z_min, z_max] }⊂{β : β_1 ≤R/λ}, where R= sup_z ∈ [z_min, z_max] f(y(z), 0) . Also, it is easy to see that, along the path y(z)_2 ≤max(y(z_min)_2, y(z_max)_2) y^⋆(z)_2 = X_A β_A^⋆_2 ≤σ_max(X_A) × R/λ ζ(z)_2 = √(y(z)_2^2 + y^*(z)_2^2) ≤√(max(y(z_min)_2, y(z_max)_2)^2 + (σ_max(X_A) × R/λ)^2 ) =: B . Note that, for simplicity, we naturally suppose that the estimate β̂(z) is a better minimizer than the vector 0. Thus, the same bounds above hold for β̂(z), ŷ(z) and ζ̂(z). § CHOICE OF THE RANGE [Z_MIN, Z_MAX] []lemjustified Choosing z_0 = z_max = max(y_1, …, y_n) and stopping once z_t ≤ z_min = min(y_1, … y_n) reduces the probability of coverage by at most 2/n+1. Given our exchangability assumption, the probability that y_n+1≥ z_max is at most 1/n+1. Similarly, the probability that y_n+1≤ z_min is at most 1/n+1. Therefore, using the union bound, the probability that choosing the criteria we do in our algorithm affects coverage by at most 2/n+1, which becomes negligible as n grows. § DETAILS ON ∂_2 F In the main text, we mentioned that we are approximating ∂_2 f(y(z^+), y^⋆(z^+)). We can do this via linearization. ∂_2 f ∘ζ(z^+) ≈∂_2 f ∘ζ(z) + ∂_2, 1 f ∘ζ(z) (y(z^+) - y(z)) + ∂_2, 2 f ∘ζ(z)(y^⋆(z^+) - y^⋆(z)) Moreover, y(z^+) - y(z) = (0, …, 0, z^+ - z) y^⋆(z^+) - y^⋆(z) = X_A (β_A^⋆(z^+) - β_A^⋆(z)) ≈ X_A ∂β_A^⋆/∂ z(z) × (z^+ - z) Finally, with a plug-in approach, we approximate β̂_A^'(z) ≈∂β_A^⋆/∂ z(z) and obtain ∂_2 f ∘ζ(z^+) ≈∂_2 f ∘ζ(z) + ( [∂_2,1 f ∘ζ(z)]_n+1 + [∂_2, 2 f ∘ζ(z)] X_A β̂_A^'(z) ) × (z^+ - z) Here, the first equality come from definition, the second is from applying the chain rule to intermediate variables, the third is from a simple notational switch, and the final equality comes from using our estimators for β^⋆. Now, we can find the roots of ℐ from <Ref> as the following z_j, t+1^in = z_t + -X_j^⊤∂_2 f ∘ζ̂(z_t) ±λ/X_j^⊤[[∂_2,1 f∘ζ̂(z_t)]_n+1 + ∂_2,2f∘ζ̂(z_t)^⊤ X_A β̂_A^'(z_t)]. We can now present our desired theorem. lemboundgradient The gradient of the solution path ∂β^⋆/∂ z(z), as well as its estimates β̂^'(z) are bounded as follow max( ∂β^⋆/∂ z(z), β̂^'(z)) ≤σ_max(X_A(z))/σ_min^2(X_A(z))×ν_f/μ_f. We remind that ∂ y/∂ z = (0, …, 0, 1)^⊤ and ∂β_A^⋆/∂ z = -(∂ H/∂β)^-1∂ H/∂ y∂ y/∂ z β̂_A^' = -(∂ H/∂β)^-1∂ H/∂ y∂ y/∂ z ∂ H/∂β = X_A^⊤∂_2, 2 f ∘ζ(z)X_A ∂ H/∂β = X_A^⊤∂_2, 2 f∘ζ̂(z)X_A ∂ H/∂ y = X_A^⊤∂_2,1 f ∘ζ(z) ∂ H/∂ y = X_A^⊤∂_2,1 f∘ζ̂(z) Hence ∂β_A^⋆/∂ z_2 ≤(∂ H/∂β)^-1_2∂ H/∂ y_2 By definition, we have that for any z ∈ [z_min, z_max], ∂ H/∂β (y(z), β_A^⋆(z))_2 = X_A^⊤∂_2, 2 f∘ζ(z) X_A_2 ≥σ_min(X_A^⊤ X_A) ×inf_ζ≤ Bσ_min(∂_2, 2f ∘ζ(z) ). Since f is assumed to be μ_f-strongly convex from <Ref>, it holds inf_ζ≤ Bσ_min(∂_2, 2f ∘ζ(z) ) ≥μ_f > 0 , and then (∂ H/∂β)^-1_2 ≤1/σ_min^2(X_A) ×μ_f. Similarly, given f is smooth with constant ν_f from <Ref>, we have ∂ H/∂ y (y(z), β_A^⋆(z)) _2 = X_A^⊤∂_2,1 f ∘ζ(z) _2 ≤σ_max(X_A) ∂_2,1 f ∘ζ(z) _2 ≤σ_max(X_A) ×ν_f . Hence the result. The proof for upper-bounding the estimated gradient norm follows the same line. * To analyze β̃(z^+) - β^⋆(z^+)_2, we will use the definition from our algorithm that β̃(z^+) = β̂(z_t) + β̂^'(z_t) (z^+ - z_t). Here, z_t is the last point at which we ran our primal corrector. 
Using this, we can decompose the error as the follows: β̃(z^+) - β^⋆(z^+)_2 = β̂(z_t) + β̂^'(z_t) (z^+ - z_t) - β^⋆(z^+) _2 = β̂(z_t) + β̂^'(z_t) (z^+ - z_t) - β^⋆(z^+) + β^⋆(z_t) - β^⋆(z_t) _2 = β̂(z_t) - β^⋆(z_t) + ∫_z_t^z^+[β̂^'(z_t) - ∂β^⋆(z)/∂ z] dz _2 ≤β̂(z_t) - β^⋆(z_t)_2 + sup_z ∈ [z^+, z_t]β̂^'(z_t) - ∂β^⋆(z)/∂ z_2 |z^+ - z_t| Here, the third equality comes from the fact that β^*(z^+)- β^*(z_t) = ∫_z_t^z^+∂β^⋆(z)/∂ z dz. Now, from the Triangular Inequality, we have sup_z ∈ [z^+, z_t]β̂^'(z_t) - ∂β^⋆(z)/∂ z_2 ≤β̂^'(z_t) + sup_z ∈ [z^+, z_t]∂β^⋆(z)/∂ z_2 ≤[ σ_max(X_A(z_t))/σ_min^2(X_A(z_t)) + sup_z ∈ [z^+, z_t]σ_max(X_A(z))/σ_min^2(X_A(z))] ×ν_f/μ_f Here, the second inequality comes from <Ref>. Now, if point z^+ is such that z^+ ∈ (z_t+1, z_t], the active sets at point z^+ and z_t are constant. Then, we can simplify. β̃(z^+) - β^⋆(z^+)_2 ≤ϵ_tol + 2 σ_max(X_A(z_t))/σ_min^2(X_A(z_t))×ν_f/μ_f× |z^+ - z_t| . Note that if the candidate z^+ = z_t is exactly a kink, the right-most term is zero. It only remains the corrector error. If it is not the case that z^+ and z_t have the same active set, we have β̃(z^+) - β^⋆(z^+)_2 ≤ϵ_tol + L ×ν_f/μ_f× |z^+ - z_t| where L := [σ_max(X_A(z_t))/σ_min^2(X_A(z_t)) + sup_z ∈ [z^+, z_t]σ_max(X_A(z))/σ_min^2(X_A(z))] * Let us define, for t ∈ [0, 1], the function ϕ(t) = ∂_2 f(ζ̂(z^+) + t(ζ(z^+) - ζ̂(z^+))) . We have from the fundamental theorem of calculus, ϕ(1) - ϕ(0) = ∫_0^1∂ϕ(t)/∂ t dt where ϕ(1) - ϕ(0) = ∂_2 f(ζ(z^+)) - ∂_2 f(ζ̂(z^+)) and ∂ϕ(t)/∂ t = ∂_2,1 f(ζ̂(z^+) + t(ζ(z^+) - ζ̂(z^+)))^⊤[ζ(z^+) - ζ̂(z^+)]_1 + ∂_2,2 f(ζ̂(z^+) + t(ζ(z^+) - ζ̂(z^+)))^⊤[ζ(z^+) - ζ̂(z^+)]_2 . We remind the reader that, by definition, we have [ζ(z^+) - ζ̂(z^+)]_1 = y(z^+) - y(z^+) = 0 [ζ(z^+) - ζ̂(z^+)]_2 = y^⋆(z^+) - ŷ(z^+) and deduce that ∂_2 f ∘ζ(z^+) - ∂_2 f ∘ζ̂(z^+)_2 = ϕ(1) - ϕ(0)_2 = ∫_0^1∂ϕ(t)/∂ t dt_2 =∫_0^1∂_2,2 f(ζ̂(z^+) + t(ζ(z^+) - ζ̂(z^+)))^⊤[ζ(z^+) - ζ̂(z^+)]_2 dt _2 ≤sup_t ∈ [0, 1]∂_2,2 f(ζ̂(z^+) + t(ζ(z^+) - ζ̂(z^+)))_2 y^⋆(z^+) - ŷ(z^+)_2 ≤ν_f ×X_A β_A^⋆(z^+) - X_A β̂_A(z^+)_2 ≤ν_f ×σ_max(X_A) ×β_A^⋆(z^+) - β̂_A(z^+)_2 ≤ν_f ×σ_max(X_A) ×[ ϵ_tol + L ×ν_f/μ_f× |z^+ - z_t| ] . Here, the fourth inequality comes from our <Ref> and the final inequality comes from <Ref>. § PARTIAL LINEARIZATION AS ALTERNATIVE METHODS In addition to the approximation algorithm described and analyzed above, we briefly describe a more precise but more costly method in terms of computational time. The key point is to try to capture the non-linearity of the solution-path as the input z varies or similarly when the regularization parameter λ varies. For the sake of simplicity, we describe a solution path that exploits the exact solution at each node. The practical algorithm will be based on a plug-in approach similar to that used above. In the following, we note (deleting the star notation to avoid clutter) β(z, λ) ∈_β∈ℝ^p f(y(z),Xβ) + λβ_1 . Let us define f_z(q) = f(y(z), q) and linearly approximate the function q ↦∇ f_z(q) at q_0 ∇ f_z(q) ≈∇ f_z(q_0) + ∇^2 f_z(q_0)^⊤ (q - q_0) We denote q = X_Aβ_A(z, λ) and q_0 = X_Aβ_A(z_0, λ). 
From the optimality condition <ref>, we have We only linearize second variable of ∂_2 f which is independent of z X_A^⊤∇ f_z(q) = - λ v_A(z, λ) X_A^⊤( ∇ f_z(q_0) + ∇^2 f_z(q_0)^⊤ (q - q_0) ) (<ref>)≈ - λ v_A(z, λ) X_A^⊤∇^2 f_z(q_0)^⊤ q ≈ X_A^⊤(∇^2 f_z(q_0)^⊤ q_0 - ∇ f_z(q_0) ) - λ v_A(z, λ) Hence β_A(z, λ) ≈(X_A^⊤∇^2 f_z(q_0)^⊤ X_A)^-1( X_A^⊤(∇^2 f_z(q_0)^⊤ q_0 - ∇ f_z(q_0) ) - λ v_A(z, λ) ) We recover exactly the Lasso formula when f is quadratic and also we only need to know v and not the dual variable. Path λ (z is fixed). We have the two situations for a change in the active set: * A nonzero variable becomes zero ∃ j ∈ A(z, λ) such that : β_j(z, λ) ≠ 0 and β_j(z, λ_j, out) = 0 . * A zero variable becomes nonzero ∃ j ∈ A^c(z, λ) such that : |X_j^⊤∇ f_z(Xβ(z, λ_j, in))| = λ_j, in. Then λ_next = max(max_j ∈ A(z, λ)λ_j, out, max_j ∈ A^c(z, λ)λ_j, in) . We can obtain approximate kinks in the lmd path by closed form solution and no need to invert multiple times. Path z (λ is fixed). We have the two situations for a change in the active set: * A non-zero variable becomes zero ∃ j ∈ A(z, λ) such that : β_j(z, λ) ≠ 0 and β_j(z_j, out, λ) = 0 . * A zero variable becomes nonzero ∃ j ∈ A^c(z, λ) such that : |X_j^⊤∇ f_z_j, in(Xβ(z_j, in, λ))| = λ. Then z_next = max(max_j ∈ A(z, λ) z_j, out, max_j ∈ A^c(z, λ) z_j, in) . The core drawbacks is that <ref> is non-linear in z which makes the kink finder more complicated. Hence we need to use a root-finding (bisection search) algorithm to estimate accurately the root. This require re-computing both (X_A^⊤∇^2 f_z(q_0)^⊤ X_A)^-1 and ∇ f_z(q_0) at every trial value z a dozen number of times which can be expensive. To overcome this issue, we propose to linearize both in the first and second variable We remind that ∇ f_z(q) = ∂_2 f(y(z), q); both notation will be used for simplicity or clarity. Using a first order approximation, we have ∂_2 f(y(z), q) ≈∂_2 f(y(z_0), q_0) + ∂_2 1 f(y(z_0), q_0)^⊤ (y(z) - y(z_0)) + ∂_2 2 f(y(z_0), q_0)^⊤ (q - q_0) Using back the compact notation, we have ∂_2 2 f(y(z_0), q_0) = ∇^2 f_z_0( q_0) and ∂_2 f(y(z_0), q_0) = ∇ f_z_0(q_0). Also, we have y(z) - y(z_0) = (0, …, 0, z - z_0), which implies that ∂_2 1 f(y(z_0), q_0)^⊤ (y(z) - y(z_0)) = ∂̃_n+1 f(z_0) (z - z_0) where we denoted ∂̃_n+1 f(z_0) the last coordinate of ∂_2 1 f(y(z_0), q_0). Finally, we can plug the linear approximation into the optimality condition and obtain - λ v_A(z, λ) = X_A^⊤∇ f_z(q) - λ v_A(z, λ) (<ref>)≈ X_A^⊤( ∇ f_z_0(q_0) + ∂̃_n+1 f(z_0) (z - z_0) + ∇^2 f_z_0( q_0)^⊤ (q - q_0) ) X_A^⊤∇^2 f_z_0(q_0)^⊤ q ≈ X_A^⊤(∇^2 f_z_0(q_0)^⊤ q_0 - ∇ f_z_0(q_0) - ∂̃_n+1 f(z_0) (z - z_0) ) - λ v_A(z, λ) Hence β_A(z, λ) ≈(X_A^⊤∇^2 f_z_0(q_0)^⊤ X_A)^-1( X_A^⊤(∇^2 f_z_0(q_0)^⊤ q_0 - ∇ f_z_0(q_0) - ∂̃_n+1 f(z_0) (z - z_0) ] - λ v_A(z, λ) ) Now the <Ref> is linear both in z and λ but cheap to compute whereas <ref> capture the non linearity in z. The point of this section was to show that several more or less precise approximations can be easily constructed, and they lead to different properties. For example, for optimization purposes, it is more interesting, but unfortunately more costly, to capture the non-linearity of the solution path as much as possible. We haven't taken this option in this article, as we've observed in the examples we've tested that prediction accuracy is more important than estimation (of the optimal solution) accuracy when it comes to calculating conformal prediction sets. 
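For completeness, here is a sketch of the doubly linearized update derived above, under the separability assumption that makes ∇^2 f_z_0(q_0) diagonal; the argument names and the dense linear solve are our own choices.

```python
import numpy as np

def partially_linearised_beta(X_A, q0, grad_q0, hess_q0_diag, d21_last, z, z0, lam, v_A):
    """Doubly linearised coefficient update around (z0, q0 = X_A beta_A(z0, lam)).
    hess_q0_diag holds the diagonal of grad^2 f_{z0}(q0); d21_last is the cross
    derivative for the candidate sample n+1; v_A is the sign pattern on the active set."""
    drift = np.zeros(X_A.shape[0])
    drift[-1] = d21_last * (z - z0)                 # only the candidate label moves with z
    H = X_A.T @ (hess_q0_diag[:, None] * X_A)       # X_A^T grad^2 f_{z0}(q0) X_A
    rhs = X_A.T @ (hess_q0_diag * q0 - grad_q0 - drift) - lam * v_A
    return np.linalg.solve(H, rhs)
```

Whether the extra accuracy of this update justifies the additional linear solves is exactly the trade-off discussed in the previous paragraph.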
Another interesting approach would be to adopt paths based on checking the support of the optimal solution <cit.> as the input z or the regularization parameter λ of the problem changes. Among other things, this ensures that the active sets used always contain the optimal active set at every point.
http://arxiv.org/abs/2307.07328v1
20230714131221
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy
[ "Zihao Zhu", "Mingda Zhang", "Shaokui Wei", "Li Shen", "Yanbo Fan", "Baoyuan Wu" ]
cs.CR
[ "cs.CR", "cs.LG" ]
Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy Zihao Zhu[1] Mingda Zhang[1] Shaokui Wei[1] Li Shen[2] Yanbo Fan[3] Baoyuan Wu[1]Corresponding Author [1]School of Data Science, Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen [2]JD Explore Academy [3]Tencent AI Lab {zihaozhu, mingdazhang, shaokuiwei}@link.cuhk.edu.cn; {mathshenl, fanyanbo0124}@gmail.com; [email protected] August 12, 2023 =========================================================================================================================================================================================================================================================================================================================================================================================================== Data-poisoning based backdoor attacks aim to insert backdoor into models by manipulating training datasets without controlling the training process of the target model. Existing attack methods mainly focus on designing triggers or fusion strategies between triggers and benign samples. However, they often randomly select samples to be poisoned, disregarding the varying importance of each poisoning sample in terms of backdoor injection. A recent selection strategy filters a fixed-size poisoning sample pool by recording forgetting events, but it fails to consider the remaining samples outside the pool from a global perspective. Moreover, computing forgetting events requires significant additional computing resources. Therefore, how to efficiently and effectively select poisoning samples from the entire dataset is an urgent problem in backdoor attacks. To address it, firstly, we introduce a poisoning mask into the regular backdoor training loss. We suppose that a backdoored model training with hard poisoning samples has a more backdoor effect on easy ones, which can be implemented by hindering the normal training process (, maximizing loss mask). To further integrate it with normal training process, we then propose a learnable poisoning sample selection strategy to learn the mask together with the model parameters through a min-max optimization. Specifically, the outer loop aims to achieve the backdoor attack goal by minimizing the loss based on the selected samples, while the inner loop selects hard poisoning samples that impede this goal by maximizing the loss. After several rounds of adversarial training, we finally select effective poisoning samples with high contribution. Extensive experiments on benchmark datasets demonstrate the effectiveness and efficiency of our approach in boosting backdoor attack performance. § INTRODUCTION Training large-scale deep neural networks (DNNs) often requires massive training data. Considering the high cost of collecting or labeling massive training data, users may resort to downloading publicly free data from an open-sourced repository or buying data from a third-party data supplier. However, these unverified data may expose the model training to a serious threat of data-poisoning based backdoor attacks. Through manipulating a few training samples, the adversary could insert the malicious backdoor into the trained model, which performs well on benign samples, but will predict any poisoned sample with trigger as the target class. 
Several seminal backdoor attack methods ( BadNets <cit.>, Blended <cit.>, SSBA <cit.>, SIG <cit.>, TrojanNN <cit.> ) have shown good attack performance (, high attack success rate while keeping high clean accuracy) on mainstream DNNs. Most of these attack methods focus on designing diverse triggers ( patch trigger <cit.>, or signal trigger <cit.>), or the fusion strategy of inserting the trigger into the benign sample ( alpha-blending adopted in Blended <cit.>, digital steganography adopted in SSBA <cit.>), to make the poisoned samples stealthy and effective. However, it is often assumed that a few benign samples are randomly selected from the benign training dataset to generate poisoned samples. Some recent works <cit.> suggest that not all data are equally useful for training DNNs — some have greater importance for the task at hand or are more rich in informative content than others. Several selection strategies, such as uncertainty-based <cit.>, influence function <cit.>, forgetting events <cit.>, have been proposed to mine important samples for coreset selection <cit.>, data valuation <cit.> and active learning <cit.>. It inspires us to explore whether the backdoor performance could be boosted if the samples to be poisoned are selected according to some strategies rather than randomly, especially depending on the trigger and benign data. This underappreciated problem has rarely been studied in the backdoor learning community, and there is only one attempt <cit.> try to solve it. A filtering-and-updating strategy (FUS) <cit.> has been proposed to filter poisoning samples within a fixed-size sample pool based on forgetting events <cit.>, while disregarding the remaining samples beyond the pool, which is a local perspective. Besides, computing forgetting events for each updating step requires the same number of epochs as the full training process, resulting in a significant increase in computational cost, which is impractical in real-world scenarios. Hence, how to efficiently and effectively select samples to be poisoned with a global perspective from the entire dataset, while maintaining general applicability to diverse backdoor attacks is still an urgent problem to be solved. To address the aforementioned issue, we propose a Learnable Poisoning sample Selection strategy (LPS) that depends on triggers, poisoned fusion strategies, and benign data. The key idea behind it is that if we can successfully implant the backdoor into the model through hard poisoning samples, the backdoor behavior can be effectively generalized to other easier samples at the inference stage. A learnable binary poisoning mask is first introduced into the regular backdoor training loss (<ref>). Then finding hard samples can intuitively be obtained by hindering backdoor training process (, maximize loss ). In order to further fuse it with normal backdoor training, we consequently formulate the poisoning sample selection as a min-max optimization via an adversarial process. During the min-max two-player game, the inner loop optimizes the mask to identify hard poisoning sample, while the outer loop optimizes the model's parameters to train a backdoored model based on the selected samples. By adversarially training the min-max problem over multiple rounds, we finally obtain the high-contributed poisoning samples that serve the malicious backdoor objective. The proposed LPS strategy can be naturally adopted in any off-the-shelf data-poisoning based backdoor attacks. 
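To make the adversarial selection idea concrete, here is a schematic sketch of the alternating procedure outlined above: one epoch of ordinary training on the currently selected poisoned set, followed by re-selecting, within each non-target class, the samples on which the poisoned objective is currently hardest. The callable interfaces and the hardness score are our reading of this description, not the released implementation; the precise formulation is given in the method section.

```python
import numpy as np

def lps_select(train_one_epoch, clean_losses, poison_losses, labels, target, ratio, n_rounds):
    """Schematic LPS-style alternating selection (an illustrative sketch only).

    train_one_epoch(mask)            updates the surrogate model on data poisoned at `mask`
    clean_losses(), poison_losses()  per-sample losses l(f(x_i), y_i) and l(f(x~_i), y_t)
    labels, target                   ground-truth labels and the attack target class
    ratio                            fraction of each non-target class to poison
    """
    mask = np.zeros(len(labels), dtype=bool)
    for _ in range(n_rounds):
        train_one_epoch(mask)                            # outer step: minimise over theta_s
        hardness = poison_losses() - clean_losses()      # inner step: score how hard each sample is
        mask[:] = False
        for k in np.unique(labels):
            if k == target:                              # never poison the target class
                continue
            idx = np.where(labels == k)[0]
            n_sel = int(round(ratio * len(idx)))
            mask[idx[np.argsort(-hardness[idx])[:n_sel]]] = True
    return mask
```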
Extensive evaluations with state-of-the-art backdoor attacks are conducted on benchmark datasets. The results demonstrate the superiority of our LPS strategy over both the random selection and the FUS strategy <cit.>, while resulting in significant computational savings. The main contributions of this work are three-fold. 1) We propose a general backdoor training loss that incorporates a binary poisoning mask. 2) We propose a learnable poisoning sample selection strategy by formulating it as a min-max optimization problem. 3) We provide extensive experiments to verify the effectiveness of the proposed selection strategy on significantly boosting existing data-poisoning backdoor attacks. § RELATED WORK Backdoor attack. According to the threat model, existing backdoor attacks can be partitioned into two categories: data-poisoning based <cit.> and training-controllable based <cit.>. In this work, we focus on the former threat model, where the adversary can only manipulate the training dataset and the training process is inaccessible. Thus, here we mainly review the related data-poisoning based attacks, and we refer readers to recent surveys <cit.> for a detailed introduction to training-controllable attacks. BadNets <cit.> was the first attempt to stamp a patch on the benign image as the poisoned image, revealing the existence of backdoor in deep learning. Blended  <cit.> used the alpha blending strategy to make the trigger invisible to evade human inspection. SIG <cit.> generated a ramp or triangle signal as the trigger. TrojanNN attack <cit.> optimized the trigger by maximizing its activation on selected neurons related. SSBA <cit.> adopted a digital stenography to fuse a specific string into images by autoencoder, to generate sample-specific triggers. Subsequently, more stealthy and effective attacks <cit.> have been successively proposed. Meanwhile, some defense methods <cit.> have been proposed as shields to resist attacks. The commonality of above attacks is that they focused on designing triggers or the fusion strategy, while overlooking how to select benign samples to generate poisoned samples, and simply adopted the random selection strategy. Instead, we aim to boost existing data-poisoning backdoor attacks through a learnable poisoning sample selection strategy depending on the trigger and benign data. The filtering step is based on the forgetting events <cit.> recorded on a small number of adversaries, which ensures that the differences between samples can emerge. Afterwards, some new poisoned samples are sampled randomly from the candidate set to update the pool. The above two steps are iterated several times to find a suitable solution. Poisoning sample selection in backdoor attack. To the best of our knowledge, there is only one work <cit.> focusing on poisoning sample selection for backdoor attack. A filtering-and-updating strategy (FUS) has been proposed in <cit.> to iteratively filter and update a sample pool. The filtering step filters easily forgotten poisoning samples based forgetting events <cit.>, which are recorded by the same number of epochs as the full training process. Afterwards, some new poisoned samples are sampled randomly from the candidate set to update the pool. The above two steps are iterated several times to find a suitable solution. As the pioneering work, FUS shows good improvement in backdoor effect compared to the random selection strategy. However, FUS requires tens of times more computing resources, which is not acceptable in practice. 
§ PRELIMINARY Threat model. We consider the threat model in which the adversary can only manipulate the training dataset while the training process is inaccessible, dubbed data-poisoning based backdoor attack. It applies to the scenario in which the user trains a neural network on an unverified dataset. General procedure of data-poisoning based backdoor attacks. Here we describe the general procedure of data-poisoning based backdoor attacks. As shown in <ref>, it consists of 5 steps: (1) Design trigger (by adversary). The first step of a backdoor attack is to design a trigger ϵ, whose format can be diverse in different applications, such as an image with particular textures in computer vision tasks, as shown in the right part of <ref>. (2) Select samples to be poisoned (by adversary). Let 𝒟 = {(x_i, y_i)}_i=1^|𝒟| denote the original benign training dataset that contains |𝒟| i.i.d. samples, where x_i ∈𝒳 denotes the input feature and y_i ∈𝒴 = {1, …, K} is the ground-truth label of x_i. There are K candidate classes, and the size of class k is denoted as n_k. For clarity, we assume that all training samples are ordered following the class indices, i.e., samples of class 1 first, then samples of class 2, …, then samples of class K. To ensure stealthiness and avoid harming clean accuracy, the adversary often selects only a small fraction of benign samples to be poisoned. Here we define a binary vector m = [m_1, m_2, …, m_|𝒟|] ∈{0,1}^|𝒟| to represent the poisoning mask, where m_i = 1 indicates that x_i is selected to be poisoned and m_i = 0 means it is not selected. We denote α := ∑_i=1^|𝒟| m_i / |𝒟| as the poisoning ratio. Note that most existing backdoor attack methods randomly select α·|𝒟| samples to be poisoned. (3) Generate poisoned samples (by adversary). Given the trigger ϵ and a selected sample x_i (i.e., m_i = 1), the adversary designs a strategy to fuse ϵ into x_i to generate the poisoned sample x̃_i, i.e., x̃_i = g(x_i, ϵ), with g(·, ·) denoting the fusion operator (e.g., the alpha-blending used in Blended <cit.>). Besides, the adversary has the authority to change the original ground-truth label y_i to the target label ỹ_i. If the target label is the same for all poisoned samples (i.e., ỹ_i = y_t), it is called an all-to-one attack. If the target labels have different types (e.g., ỹ_i = y_i + 1), it is called an all-to-all attack. If the adversary does not change the ground-truth label (i.e., ỹ_i = y_i), it is called a clean-label attack. Thus, the generated poisoned training dataset can be denoted as 𝒟̃ = {(x_i, y_i) if m_i = 0, or (x̃_i, ỹ_i) if m_i = 1}_i=1^|𝒟|. (4) Train the target model (by user). Given the poisoned training dataset 𝒟̃, the user trains the target model f_θ_t by minimizing the following loss function: ℒ(θ_t; 𝒟̃) = 1/|𝒟̃| ∑_(x, y) ∈𝒟̃ ℓ(f_θ_t(x), y) ≡ ℒ(θ_t; 𝒟, m, ϵ, g) = 1/|𝒟| ∑_i=1^|𝒟| [(1 - m_i)·ℓ(f_θ_t(x_i), y_i) + m_i·ℓ(f_θ_t(x̃_i), y_t)], where ℓ(·,·) is the loss function for an individual sample, such as the cross-entropy loss. In <ref>, we extend <ref> by introducing the binary poisoning mask m described in step 2. (5) Activate the backdoor using the trigger during the inference stage (by adversary). Given the trained model f_θ_t, the adversary expects to activate the injected backdoor using the trigger ϵ, i.e., to fool f_θ_t into predicting any poisoned sample g(x_i, ϵ) as the target label ỹ_i. Most backdoor attack methods concentrate on designing diverse triggers (i.e., step 1) or the fusion strategy (i.e., step 3).
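The masked training loss in the equation above (step 4) is straightforward to compute; the sketch below spells it out for the all-to-one case, with the per-sample loss and the trigger fusion g left as callables. The interface is ours, not the authors' code.

```python
import numpy as np

def poisoned_training_loss(per_sample_loss, x, y, mask, fuse, trigger, y_target):
    """Masked loss of step 4: clean loss on unselected samples, poisoned loss
    (triggered input, target label) on selected ones (all-to-one case).
    per_sample_loss(inputs, labels) returns a length-|D| vector of l(f(.), .)."""
    clean = per_sample_loss(x, y)                                   # l(f(x_i), y_i)
    poisoned = per_sample_loss(fuse(x, trigger), np.full_like(y, y_target))
    return np.where(mask, poisoned, clean).mean()
```

Note that the loss depends on the mask only through which samples are triggered and relabeled, so changing the selection strategy leaves the rest of the attack pipeline untouched.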
These attacks typically randomly select samples for poisoning (, step 2), neglecting the unequal influence of each poisoning samples to the backdoor injection. Recent FUS strategy <cit.> , as shown in <ref>, filters unimportant poisoning samples in a pool based on forgetting events <cit.>, while ignoring the rest of the samples outside the pool, which is a local perspective. Besides, since the characteristics of poisoning samples vary from different attacks, the selected samples that succeed in one attack may not be effective in others. Therefore, it is a challenging task to develop a poisoning sample selection strategy that can select poisoning samples from the entire dataset and be generally applicable to various backdoor attacks. § METHODOLOGY: LEARNABLE POISONING SAMPLE SELECTION STRATEGY This work aims to design a novel sample selection strategy to enhance the impact of a backdoor in the trained target model, denoted as f_θt. As the target model fθt is agnostic to adversaries, we adopt a surrogate model fθ_s as an alternative. In order to select poisoning samples from the entire dataset with a global perspective, we opt to directly generate the poisoning mask m in step 2. We suppose that if backdoor can been implanted into the model through training with hard poisoning samples, the backdoor can be generally activated by other easy samples during the inference stage. To achieve this, an intuitive way is to hinder the normal backdoor training from an opposite direction, , maximize the loss in <ref> given the surrogate model. To combine it with the normal training process (, minimize <ref>), we propose a Learnable Poisoning sample Selection (LPS) strategy to learn the poisoning mask along with the surrogate model's parameters θ_s through a min-max optimization: min_θ_smax_∈{0,1}^||{(θ_s, ; ,ϵ, g) s.t. = α̃·μ}, where is extended loss including poisoning mask that defined in <ref>. ∈{0,1}^K × || is defined as: in the k-th row, the entries (k, ∑_j=1^k-1 n_j + 1: ∑_j=1^k n_j) = 1, while other entries are 0. α̃ = α· ||/∑_k ≠ y_t n_k and α̃n_k is integer for all k. μ = [μ_1; μ_2; …; μ_K] ∈ℕ^K is defined as: if k≠ y_t, then μ_k = n_k, otherwise μ_k = 0. This equation captures three constraints, including: 1) α· || samples are selected to be poisoned; 2) the target class samples cannot be selected to be poisoned; 3) each non-target class has the same selected ratio α̃ to encourage the diversity of selected samples. Note that here we only consider the setting of all-to-one attack, but the constraint can be flexibly adjusted for all-to-all and clean label settings. [14]r0.53 Remark. This min-max objective function (<ref>) is designed for finding hard poisoning samples with high-contribution for backdoor injection via an adversarial process. Specifically, the inner loop encourages to select hard samples for the given model's parameters θ_s by maximizing the loss , while the outer loop aims to update θ_s by minimizing the loss f_θ_s to ensure that a good backdoored model can be still learned, even based on the hard poisoning mask . Thus, the two-palyer game between and θ_s is expected to encourage the selected samples to bring in good backdoor effect, while avoiding over-fitting to the surrogate model f_θ_s. Optimization. As summarized in <ref>, the min-max optimization (<ref>) could be efficiently solved by alternatively updating and θ_s as follows: 169 Outer minimization: given , θ_s could be updated by solving the following sub-problem: θ_s ∈min_θ_s   (θ_s; , ,ϵ, g). 
It could be optimized by the standard back-propagation method with stochastic gradient descent (SGD) <cit.>. Here we update θ_s for one epoch in each iteration. 169 Inner maximization: given θ_s, could be achieved by solving the maximization problem as: ∈max_∈{0,1}^|| {(; θ_s, ,ϵ, g),   s.t.   = α̃ ·μ}. Although it is a constrained binary optimization problem, it is easy to obtain the optimal solution. Specifically, given the hard constraint = α̃·μ, the above problem could be separated into K independent sub-problems, , max__k ∈{0,1}^n_k 1/|| {∑_i=1^|| 𝕀(y_i=k)·m_i ·[ ℓ(f_θ_s(_i), y_t) - ℓ(f_θ_s(_i),y_i) ],   s.t.  1_n_k^⊤_k = α̃ ·n_k}, for ∀ k ∈{1,2,…,K} except k=y_t. _k denotes the sub-mask vector of corresponding to samples of class k, and 𝕀(a) = 1 if a is true, otherwise 0. Note that some constant terms _k have been abandoned in the above sub-problem. And, since it is constrained that only non-target class samples can be selected, _y_t is always a zero vector. It is easy to obtain the optimal solution by firstly calculating ℓ(f_θ_s(_i), y_t) - ℓ(f_θ_s(_i),y_i) for all samples satisfying 𝕀(y_i=k)=1 and ranking them in descending order, then picking the top-(α̃· n_k) indices to set the corresponding m_i as 1, while others as 0. § EXPERIMENTS §.§ Experimental settings Implementation details. For the training of both surrogate models and target models, we adopt SGD optimizer with weight decay 5e 4, the batch size 128, the initial learning rate 0.01 and reduced by 10 after 35 and 55 epochs, respectively. The training epoch for target models is 100. The maximal iteration T is set as 15. All experiments are conducted on NVIDIA GTX 3090 GPUs. Datasets and models. We evaluate on three commonly used benchmark datasets: CIFAR-10 <cit.>, CIFAR-100 <cit.> and Tiny-ImageNet <cit.>. The surrogate model and target model are ResNet-18<cit.> and ResNet-34, respectively. Baselines of poisoning sample selection. We compare our proposed LPS strategy with two existing poisoning sample selection strategies: Random and FUS <cit.>. Random strategy selects benign samples following a uniform distribution. FUS <cit.> selects samples according to the sample importance measured by forgetting events[Note that in the experiments reported in <cit.>, FUS appended the generated poisoned samples onto the original benign dataset, rather than replacing the selected benign samples, leading to ||≥||. To ensure fair comparison, we change it to the traditional setting in existing attacks that the selected benign samples to be poisoned are replaced by the generated samples, thus ||=||.]. Following the original setting in <cit.>, we set 10 overall iterations and 60 epochs for updating the surrogate model in each iteration. Backdoor attacks. We consider 5 representative backdoor attacks: 1) visible triggers: BadNets <cit.>, Blended <cit.>; SIG <cit.>; 2) optimized triggers: Trojan-Watermark (Trojan-WM) <cit.>; 3) sample-specific triggers: SSBA <cit.>. In addition, we consider 3 poisoning label types: all-to-one, all-to-all and clean label. We visualize different triggers with the same benign image in <ref>. The detailed settings of each attack can been found in supplement materials. Backdoor defenses. 
We select 6 representative backdoor defenses to evaluate the resistance of above attack methods with different poisoning sample selection strategies, including Fine-Tuning (FT), Fine-Pruning (FP) <cit.>, Anti-Backdoor Learning (ABL) <cit.>, Channel Lipschitzness Pruning (CLP) <cit.>, Neural Attention Distillation (NAD) <cit.>, Implicit Backdoor Adversarial Unlearning (I-BAU) <cit.>. The detailed settings of each defense can been found in supplement materials. §.§ Main results We evaluate our LPS strategy under various experimental settings, including comparisons with baseline strategies on various attacks and poisoning ratios, comparisons on different datasets and resistance to defenses. The attack results on CIFAR-10, CIFAR-100, and Tiny-ImageNet can be found in Tab. <ref>,<ref>,<ref> respectively. Additionally, <ref> presents the defense results on CIFAR-10. Besides, we find that due to the low poisoning ratios, the impacts of different poisoning sample selection strategies on the clean accuracy are almost similar (as shown in <ref>). Thus, for clarity, we omit ACC in most result tables, except for <ref>. Three random trials are conducted for the main experiments to report the mean and standard deviation. More results about different models can be found in supplement materials. Compare with state-of-the-art baselines. To verify the effectiveness of our proposed LPS strategy, we first compare with two existing strategies on CIFAR-10, in which the surrogate model is ResNet-18 and the target model is ResNet-34. Different from <cit.>, we conduct experiments under low poisoning ratios (< 1%), which is more stealthy and more likely to escape human inspection. The attack success rate is shown in <ref>, where #Img/Cls denotes the number of samples to be poisoned per class for all-to-one setting, and pratio is short for poisoning ratio. 1) From a global view, we observe that LPS strategy outperforms the baselines under most of the settings. For example, with 0.216% poisoning ratio, LPS strategy can boost BadNets (all-to-all) by 30.61% compared to FUS, and Blended (all-to-one) can be improved by 13.53%. 2) From the perspective of poisoning ratios, LPS strategy can be widely applied to different poisoning ratios, but the degree of improvement is also related to the poisoning ratio. Specifically, when the poisoning ratio is extremely low (, 1 Img/Cls, 0.054% pratio), although the improvement of our method is not obvious compared with other strategies due to the attack itself being weak, it also shows similar results. However, once the poisoning ratio is increased, LPS shows a strong advantage over other strategies. 3) From the perspective of attacks, our LPS strategy consistently improves different types of triggers and poisoning labels, demonstrating that LPS strategy is widely applicable to various backdoor attacks. Compare with different datasets. To verify whether our proposed LPS strategy supports larger datasets (more images and classes, larger image size), we also evaluate these three strategies on CIFAR-100 and Tiny-ImageNet. The results in <ref> further demonstrate the superiority of LPS strategy to both the random selection and the FUS strategy. Resistance to backdoor defense. We further evaluate the resistance against defenses of different poisoning sample selection strategies. The defense results are shown in <ref>. 
It can be seen our method outperforms others in most cases (higher ASR is better), indicating that a reasonable poisoning sample selection strategy probably makes the attack better resistant to defenses. §.§ Ablation studies r0.5 table Ablation studies of LPS’s constraints. ! Attack Pratio LPS LPS\_ET LPS\_ET,PC FUS<cit.> BadNets<cit.> 0.216% 80.58 75.33 71.47 68.01 Blended<cit.> 0.432% 87.20 85.72 82.71 79.06 SSBA<cit.> 0.432% 23.29 21.18 20.36 14.86 Trojan-WM<cit.> 0.216% 93.27 89.91 87.80 77.63 Effects of different constraints in LPS. As demonstrated under <ref>, the equation = α̃·μ captures three constraints, including satisfying the poisoning ratio, excluding the target class (dubbed ET), and selecting the same number of samples per class (dubbed PC), respectively. Here we compare LPS with its two variants of changing the last two constraints, including: 1) LPS without excluding target class (LPS\_ET), 2) LPS\_ET without selecting the same number of poisoned samples per class (LPS\_ET,PC). The results in <ref> show that both constraints are important for the LPS strategy. Note that even removing two constraints, LPS\_ET,PC still outperforms FUS. r0.5 < g r a p h i c s > Attack results of LPS strategy on CIFAR-10 under different iterations T. Effect of the number of iterations T. In <ref>, our LPS method requires iteratively solving a min-max optimization problem. Here we explore the effect of different iterations T on the attack results. As shown in <ref>, we evaluate LPS strategy in a wide range of iterations from 1 to 50. We can see that LPS strategy shows stable and high performance in the range T∈[10,20]. Therefore, we choose T=15 as the default setting of the main experiments. § ANALYSIS Analysis of computational complexity. Both LPS and FUS adopt the iterative algorithm by alternatively updating the surrogate model and the poisoning mask. In term of updating the surrogate model in each iteration, the complexity is O(||K(F+B)), with || being the train data size, F is the cost of forward pass in a DNN model and B being the backward <cit.> pass cost, K being the number of epochs. In terms of updating the poisoning mask, it requires one forward pass for all training samples, then the complexity is O(|| F). Thus, the overall complexity of both LPS and FUS is O(T||((K+1)F+KB)), T being the number of overall iterations. It is notable that in FUS, the surrogate model is re-initialized in each iteration, so it has to set K as a large number (, 60), while our LPS sets K as 1. Thus, our practical efficiency is much better than FUS. We compare the full training time of different strategies in the supplement materials. r0.65 < g r a p h i c s > Visualization of samples selected by our LPS (a) and FUS (b). Visualization of selected samples. In <ref>, we visualize some samples selected by our method and FUS from Tiny-ImageNet, from which we can find that our method differs from FUS in two aspects. First, our method tends to select samples with discriminative patterns that is easy to remember. It indicates that our method prefers samples with higher clean confidence. Second, the samples selected by our method have a higher inter-class similarity. To evaluate the inter-class similarity, we compute the average pairwise Structural Similarity Index (SSIM)<cit.> within each class over samples selected by our method and FUS, respectively. Since some classes are ignored by FUS, we only report the classes selected by both our method and FUS. 
The results are reported in <ref>, which show that our LPS selects samples with higher intra-class similarity.

The importance of selected samples. In <ref>, we present the histogram of forgetting events for Blended-trigger poisoned samples from CIFAR-10 obtained using different strategies at a very low poisoning ratio. Forgetting events were calculated during the standard training of the target model, given the poisoning masks obtained by the different strategies. The results show that a DNN trained with poisoned samples that incur few forgetting events yields a stronger, better-generalizing backdoor; poisoned samples that are repeatedly forgotten are not firmly memorized by the network, and the backdoor loses generalization capability.

§ CONCLUSION AND FUTURE WORK

This work has explored an often overlooked step in data-poisoning-based backdoor attacks, namely selecting which benign samples are used to generate poisoned samples. We propose a learnable poisoning sample selection strategy based on the trigger and the benign data. It is formulated as a min-max optimization problem, where a surrogate model and a binary poisoning mask are learned together, to encourage the selected samples to produce a strong backdoor effect when training the unknown target model. Extensive results validate the effectiveness and efficiency of the proposed LPS strategy in enhancing existing data-poisoning backdoor attacks.

Limitations and future works. Note that in the case of an extremely low poisoning ratio, the improvement of LPS is very limited, mainly because the poisoning information carried by a few poisoned samples with fixed triggers is insufficient to inject a backdoor, no matter which poisoning samples are selected. This suggests that learning the trigger and the poisoning sample selection jointly may further enhance the backdoor attack, which will be explored in future work. In addition, the proposed LPS strategy is specifically designed for data-poisoning backdoor attacks. Developing a similar selection strategy for training-controllable backdoor attacks also deserves to be explored in the future.

Broader impacts. The proposed LPS strategy could easily be utilized by adversaries to amplify the attack performance of existing backdoor attack methods, which underscores the urgency of developing proactive defense strategies and detection mechanisms to safeguard machine learning systems.
http://arxiv.org/abs/2307.04472v1
20230710104248
Partial Vessels Annotation-based Coronary Artery Segmentation with Self-training and Prototype Learning
[ "Zheng Zhang", "Xiaolei Zhang", "Yaolei Qi", "Guanyu Yang" ]
cs.CV
[ "cs.CV" ]
Partial Vessels Annotation-based Coronary Artery Segmentation with Self-training and Prototype Learning
Zheng Zhang^1, Xiaolei Zhang^2, Yaolei Qi^1, Guanyu Yang^1,3,4 (Z. Zhang and X. Zhang contributed equally to this work.)
^1 LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096, China
^2 Dept. of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, Nanjing, China
^3 Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing 210096, China
^4 Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Strasbourg, France
August 12, 2023
=======================================================================================================

Coronary artery segmentation on coronary-computed tomography angiography (CCTA) images is crucial for clinical use. Due to the expertise-required and labor-intensive annotation process, there is a growing demand for relevant label-efficient learning algorithms. To this end, we propose partial vessels annotation (PVA) based on the challenges of coronary artery segmentation and clinical diagnostic characteristics. Further, we propose a progressive weakly supervised learning framework to achieve accurate segmentation under PVA. First, our proposed framework learns the local features of vessels to propagate the knowledge to unlabeled regions. Subsequently, it learns the global structure by utilizing the propagated knowledge, and corrects the errors introduced in the propagation process. Finally, it leverages the similarity between feature embeddings and the feature prototype to enhance testing outputs. Experiments on clinical data reveal that our proposed framework outperforms the competing methods under PVA (24.29% vessels), and achieves comparable performance in trunk continuity to the baseline model using full annotation (100% vessels).

§ INTRODUCTION

Coronary artery segmentation is crucial for clinical coronary artery disease diagnosis and treatment <cit.>. Coronary-computed tomography angiography (CCTA), as a non-invasive technique, has been certified and recommended as an established technology in the cardiological clinical arena <cit.>. Thus, automatic coronary artery segmentation on CCTA images has become increasingly sought after as a means to enhance diagnostic efficiency for clinicians. In recent years, the performance of deep learning-based methods has surpassed that of conventional machine learning approaches (e.g. region growing) in coronary artery segmentation <cit.>. Nevertheless, most of these deep learning-based methods highly depend on accurately labeled datasets, which require labor-intensive annotation. Therefore, there is a growing demand for label-efficient learning algorithms for automatic coronary artery segmentation on CCTA images. Label-efficient learning algorithms have garnered considerable interest and research effort in natural and medical image processing <cit.>, while research on label-efficient coronary artery segmentation for CCTA images is lagging slightly behind. Although numerous label-efficient algorithms for coronary artery segmentation in X-ray angiograms have been proposed <cit.>, only a few studies focus on CCTA images. Qi et al. <cit.> proposed an elaborately designed EE-Net to achieve commendable performance with limited labels.
Zheng et al. <cit.> adapted nnU-Net to the semi-supervised segmentation setting as the generator of a GAN, achieving satisfactory performance on CCTA images. Most of these studies use incomplete supervision, which labels only a subset of the data. However, other types of weak supervision (e.g. inexact supervision), which are widely used in natural image segmentation <cit.>, are seldom applied to coronary artery segmentation on CCTA images. Different types of supervision are utilized according to the specific task. The application of various types of weak supervision to coronary artery segmentation on CCTA images is inhibited by the following challenges. 1) Difficult labeling (Fig. <ref>(a)). The target regions are scattered, while manual annotation is drawn slice by slice on the planes along the vessels. Also, the boundaries of branches and peripheral vessels are blurred. These factors make the annotation process time-consuming and expertise-demanding. 2) Complex topology (Fig. <ref>(b)). The coronary artery exhibits a complex and slender structure, with diameters ranging from 2 mm to 5 mm. The tree-like structure varies across individuals. Based on these challenges and the insight that vessels share local features (Fig. <ref>(b)), we propose partial vessels annotation and our framework as follows. Given the above, we propose partial vessels annotation (PVA) (Fig. <ref>(c)) for CCTA images. While PVA is a form of partial annotation (PA), which has been adopted by a number of studies <cit.>, our proposed PVA differs from the commonly used PA methods. More specifically, PVA labels vessels continuously from the proximal end to the distal end, while the labeled regions of PA are typically randomly selected. Thus, our proposed PVA has two merits. 1) PVA balances efficiency and informativity. Compared with full annotation, PVA only requires clinicians to label vessels within restricted regions in adjacent slices, rather than all scattered target regions in each individual slice. Compared with PA, PVA keeps labeled vessels continuous to preserve local topological information. 2) PVA provides flexibility for clinicians. Given that clinical diagnosis places greater emphasis on the trunks than on the branches, PVA allows clinicians to focus their labeling efforts on vessels of particular interest. Therefore, our proposed PVA is well-suited for clinical use. In this paper, we further propose a progressive weakly supervised learning framework for PVA. Our proposed framework, using PVA (only 24.29% vessels labeled), achieved better performance than the competing weakly supervised methods, and comparable performance in trunk continuity with the full annotation (100% vessels labeled) supervised baseline model. The framework works in two stages, namely the local feature extraction (LFE) stage and the global structure reconstruction (GSR) stage. 1) The LFE stage extracts the local features of the coronary artery from the limited labeled vessels in PVA, and then propagates the knowledge to unlabeled regions. 2) The GSR stage leverages prediction consistency during the iterative self-training process to correct the errors that are inevitably introduced by the label propagation process. The code of our method is available at <https://github.com/ZhangZ7112/PVA-CAS>. To summarize, the contributions of our work are three-fold: * To the best of our knowledge, we proposed partial vessels annotation for coronary artery segmentation for the first time, which is in accord with clinical use. First, it balances efficiency and informativity.
Second, it provides flexibility for clinicians to concentrate their annotation on the vessels they pay most attention to. * We proposed a progressive weakly supervised learning framework for partial vessels annotation-based coronary artery segmentation. It requires only 24.29% of vessels to be labeled, yet achieves comparable performance in trunk continuity to the baseline model using full annotation. Thus, it shows great potential to lower the labeling cost for relevant clinical and research use. * We proposed an adaptive label propagation unit (LPU) and a learnable plug-and-play feature prototype analysis (FPA) block in our framework. The LPU integrates the functions of pseudo label initialization and updating, and dynamically adjusts the updating weights according to the calculated confidence level. The FPA block enhances vessel continuity by leveraging the similarity between feature embeddings and the feature prototype.

§ METHOD

As shown in Fig. <ref>, our proposed framework for partial vessels annotation (PVA) works in two stages. 1) The LFE stage (Sec. <ref>) extracts and learns vessel features from PVA locally. After the learning process, it infers on the training set to propagate the learned knowledge to unlabeled regions, and its outputs are integrated with PVA labels to initialize pseudo labels. 2) The GSR stage (Sec. <ref>) utilizes pseudo labels to conduct self-training, and leverages prediction consistency to improve the pseudo labels. In our proposed framework, we also designed an adaptive label propagation unit (LPU) and a learnable plug-and-play feature prototype analysis (FPA) block. The LPU initializes and updates the pseudo labels; the FPA block is learned before testing and improves the final output during testing.

§.§ Local Feature Extraction Stage

In the LFE stage, our hypothesis is that the small areas surrounding the labeled regions hold valid information. Based on this, a light segmentation model 𝒮_l is trained to learn vessel features locally, with small patches centered around the labeled regions as input and output. In this manner, the negative impact of inaccurate supervision information in unlabeled regions is also reduced.

§.§.§ Pseudo Label Initialization in LPU.

After training, 𝒮_l propagates the learned knowledge of local features to unlabeled regions. For each image of shape H× W× D, the corresponding output logit ŷ_1∈ [0,1]^H× W× D of 𝒮_l provides a complete, albeit approximate, estimate of the distribution of vessels. Meanwhile, the PVA label y_PVA∈{0,1}^H× W× D provides accurate information on the distribution of vessels, but only to a limited extent. Therefore, the LPU integrates ŷ_1 and y_PVA to initialize the pseudo label y_PL (Equ. <ref>), which will be utilized in the GSR stage and updated during iterative self-training:

y_PL^(t=0)(h,w,d) = 1 if y_PVA(h,w,d)=1, and y_PL^(t=0)(h,w,d) = ŷ_1(h,w,d) otherwise, for every voxel (h,w,d) of the H× W× D volume.

§.§ Global Structure Reconstruction Stage

The GSR stage mainly consists of three parts: 1) the segmentation model 𝒮_g to learn the global tree-like structure; 2) the LPU to improve pseudo labels; and 3) the FPA block to improve segmentation results at testing. Through the initialization (Equ. <ref>), the initial pseudo label y_PL^(t=0) contains the information of both the PVA labels and the knowledge of local features in 𝒮_l. Therefore, at the beginning of this stage, 𝒮_g learns from y_PL^(t=0) to warm up. After this, the logits of 𝒮_g are utilized to update the pseudo labels during iterative self-training.

§.§.§ Pseudo Label Updating in LPU.
The principle of this process is that a more reliable logit should have a greater influence on the distribution of the corresponding pseudo label. Based on this principle, we first calculate the confidence degree η^(t)∈ [0,1] for ŷ_2^(t). As defined by Equ. <ref>, η^(t) numerically equals the average of the logits in the labeled regions. This definition makes sense since the expected logit equals one in vessel regions and zero in background regions. The closer ŷ_2^(t) gets to the expected logit, the higher η^(t) (the confidence degree) will be.

η^(t) = ∑_h∑_w∑_d y_PVA(h,w,d) ·ŷ_2^(t)(h,w,d) / ∑_h∑_w∑_d y_PVA(h,w,d)

Then, a quality control test is performed to avoid negative optimization as far as possible. If the confidence degree η^(t) is higher than all elements in the set {η^(i)}_i=1^t-1, the current logit is deemed trustworthy and passes the test to improve the pseudo label. Then, y_PL^(t) is updated as the exponentially weighted moving average (EWMA) of the logits and the pseudo labels (Equ. <ref>). This process is similar to prediction ensembling <cit.>, which has been adopted to filter pseudo labels <cit.>. However, different from those methods, where the factor η^(t) is a fixed hyperparameter coefficient and the pseudo labels are updated every epoch or every several epochs, η^(t) in our method is adaptive and a quality control test is performed:

y_PL^(t) = η^(t)ŷ_2^(t) + (1-η^(t)) y_PL^(t-1) if η^(t) = max{η^(i)}_i=1^t, and y_PL^(t) = y_PL^(t-1) otherwise.

§.§.§ Feature Prototype Analysis Block.

Inspired by <cit.>, which generates the class feature prototype ρ_c (Equ. <ref>) from the embeddings z^l_i of labeled points in class c, we inherit the idea but further transform the mechanism into the proposed learnable plug-and-play FPA block. Empirically, we find that the output of the FPA block has good continuity, so the FPA output is utilized to enhance the continuity of the convolutional output at testing.

ρ_c = (1/|ℐ_c|) ∑_z^l_i∈ℐ_c z^l_i

In the penultimate layer of the network, which is followed by a 1×1×1 convolutional layer to output logits, we feed the feature map Z∈ℝ^C× H× W× D into the FPA block in parallel. The output similarity map O∈ℝ^1× H× W× D is calculated by Equ. <ref>, where Z(h,w,d)∈ℝ^C denotes the feature embedding of voxel (h,w,d), and ρ_θ∈ℝ^C the kernel parameters of FPA.

O(h,w,d) = exp(-‖ Z(h,w,d)-ρ_θ‖^2)

The learning process of the FPA block takes place before testing, during which the whole model except FPA is frozen. To reduce the additional overhead, ρ_θ is initialized with the one-time computed ρ_c and fine-tuned with the loss ℒ_fpa (Equ. <ref>), where only labeled voxels take effect in updating the kernel.

ℒ_fpa = ∑_h∑_w∑_d y_PVA(h,w,d)· log(O(h,w,d)) / ∑_h∑_w∑_d y_PVA(h,w,d)

§ EXPERIMENTS AND RESULTS

§.§ Dataset and Evaluation Metrics

Experiments are implemented on a clinical dataset, which includes 108 subjects of CCTA volumes (2:1 for training and testing). The volumes share the size of 512 × 512 × D, with D ranging from 261 to 608. PVA labels of the training set were annotated by clinicians, with only 24.29% of the vessels labeled. The metrics used to quantify the results include both integrity and continuity assessment indicators. The integrity assessment indicators are Mean Dice Coefficient (Dice), Relevant Dice Coefficient (RDice) <cit.>, and Overlap (OV) <cit.>; the continuity assessment indicator is Overlap until First Error (OF) <cit.> on the three main trunks (LAD, LCX and RCA).

§.§ Implementation Details

3D U-Net <cit.> is set as our baseline model. Experiments were implemented using PyTorch on a GeForce RTX 2080Ti.
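Since the framework is implemented in PyTorch, the FPA block described above reduces to only a few lines; the sketch below is our own reconstruction with illustrative names (in particular, the minus sign in the fine-tuning loss is our assumption, chosen so that minimizing it drives O toward 1 at labeled voxels), not the authors' released code.

    import torch
    import torch.nn as nn

    class FPABlock(nn.Module):
        # Learnable feature prototype analysis: compare every voxel embedding of the
        # penultimate feature map against a single prototype vector rho.
        def __init__(self, channels, init_prototype=None):
            super().__init__()
            init = init_prototype if init_prototype is not None else torch.zeros(channels)
            self.rho = nn.Parameter(init.clone().float())            # shape (C,)

        def forward(self, z):                                        # z: (B, C, H, W, D)
            diff = z - self.rho.view(1, -1, 1, 1, 1)
            return torch.exp(-(diff ** 2).sum(dim=1, keepdim=True))  # (B, 1, H, W, D) in [0, 1]

    def fpa_loss(o, y_pva, eps=1e-6):
        # Fine-tuning objective evaluated on labeled voxels only.
        return -(y_pva * torch.log(o.clamp_min(eps))).sum() / y_pva.sum().clamp_min(1.0)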
The Adam optimizer was used to train the models with an initial learning rate of 10^-4. The patch sizes were set to 128 × 128 × 128 and 512 × 512 × 256 for 𝒮_l and 𝒮_g, respectively. At test time, sliding windows with a half-window-width step were used to cover the entire volume.

§.§ Comparative Test

To verify the effectiveness of our proposed method, it is compared with both classic segmentation models (3D U-Net <cit.>, HRNet <cit.>, TransUNet <cit.>) and partial annotation-related weakly supervised frameworks (EWPA <cit.>, DMPLS <cit.>). The quantitative results of the different methods are summarized in Tab. <ref>, which shows that our proposed method outperforms the competing methods under PVA. The competing frameworks (EWPA and DMPLS) had achieved the best results in their respective tasks under partial annotation, but our proposed method achieves better results for PVA-based coronary artery segmentation. It is worth mentioning that the performance in trunk continuity (measured by the indicator OF) of our proposed method using PVA (24.29% vessels labeled) is comparable to that of the baseline model using full annotation (100% vessels labeled). The qualitative visual results verify that our proposed method outperforms the competing methods under PVA. Three cases are shown in Fig. <ref>. All the cases show that the segmentation results of our method have good overall topological integrity, especially in trunk continuity.

§.§ Ablation Study

Ablation experiments were conducted to verify the importance of the components in our proposed framework (summarized in Tab. <ref>). The performance improvement verifies the effectiveness of the pseudo label initialization (PLI) and updating (PLU) mechanisms in the label propagation unit (LPU). PLI integrates the information of PVA labels with the propagated knowledge, and PLU improves the pseudo labels during self-training. With the help of the FPA block, the segmentation results gain further improvement, especially in the continuity of the trunks.

§ CONCLUSION

In this paper, we proposed partial vessels annotation (PVA) for coronary artery segmentation on CCTA images. The proposed PVA is convenient for clinical use owing to two merits: it provides flexibility and it balances efficiency and informativity. Under PVA, we proposed a progressive weakly supervised learning framework, which outperforms the competing methods and shows comparable performance in trunk continuity to the fully annotation-supervised baseline model. In our framework, we also designed an adaptive label propagation unit (LPU) and a learnable plug-and-play feature prototype analysis (FPA) block. The LPU integrates the functions of pseudo label initialization and updating, and the FPA block improves vessel continuity by leveraging the similarity between feature embeddings and the feature prototype. To conclude, our proposed framework under PVA shows great potential for accurate coronary artery segmentation while requiring significantly less annotation effort.
http://arxiv.org/abs/2307.05544v1
20230708211940
Coupling high-overtone bulk acoustic wave resonators via superconducting qubits
[ "Wayne Crump", "Alpo Välimaa", "Mika A. Sillanpää" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall" ]
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland

In this work, we present a device consisting of two coupled transmon qubits, each of which is coupled to an independent high-overtone bulk acoustic wave resonator (HBAR). Both HBAR resonators support a plethora of acoustic modes, which can couple to the qubit near resonantly. We first show qubit-qubit interaction in the multimode system, and finally quantum state transfer where an excitation is swapped from an HBAR mode of one qubit to an HBAR mode of the other qubit.

Coupling high-overtone bulk acoustic wave resonators via superconducting qubits
Wayne Crump, Alpo Välimaa, and Mika A. Sillanpää
===============================================================================

Hybrid quantum systems seek to combine strengths and offset weaknesses of different quantum technologies in order to improve capability beyond that of any one technology. Superconducting circuits are one of the more mature quantum technologies at this stage and have been integrated with many other systems due to the relative ease in design and fabrication as well as good coherence times <cit.>. Many different acoustic systems have been integrated with superconducting circuits, such as nanomechanical oscillators <cit.>, phononic crystals <cit.>, bulk acoustic wave systems <cit.> and surface acoustic wave systems <cit.>. Acoustic resonators can offer great coherence properties <cit.> as well as smaller mode volumes due to the relation between wave velocity and wavelength, with the difficulty coming in coupling these resonators strongly with electromagnetic systems. The strong coupling of acoustic modes with superconducting qubits has resulted in many experiments exploring the quantum nature of mechanical oscillations, with experiments demonstrating number splitting <cit.>, the creation of non-classical states in the acoustic mode <cit.>, Landau-Zener-Stückelberg interference <cit.>, and entanglement <cit.>. The ability to prepare acoustic resonators in arbitrary quantum states opens up the possibility of using them in applications such as quantum memories due to their coherence properties and insensitivity to electromagnetic noise. High-overtone bulk acoustic wave resonators (HBAR) offer access to mechanical modes in the GHz regime, making them attractive for integration with superconducting qubits. The piezoelectric interaction enables coupling in the strong regime and allows their state to be controlled and read out using the qubit. The system has been implemented using 3D <cit.> and 2D <cit.> transmon architectures with part or all of the qubit capacitor directly patterned on the piezo layer of the HBAR. This was later improved in both cases by using a flip-chip design <cit.>, which has led to the current state of the art <cit.>. Experiments on these systems have demonstrated the creation of non-classical multiphonon states <cit.>, demonstration of dispersive readout for a parity measurement of the mechanical mode <cit.>, and sideband control of the mechanical modes <cit.>. Work thus far has focused on coupling of a qubit and a single HBAR device supporting a set of acoustic modes. In this work we couple two complete qubit-HBAR systems together via qubit-qubit interaction, and transfer excitations within the system, including between the HBAR modes.
This demonstrates the possibility of integrating multiple HBAR devices into quantum circuits, enabling the exploration of much larger and more complex systems. In the system there are two qubits which are coupled together as well as being individually coupled to a set of HBAR modes. The qubit-mode couplings can be described by the Jaynes-Cummings model, and the qubit-qubit coupling will be capacitive and therefore expected to take the iSWAP form <cit.>. The system as a whole can then be described by the Hamiltonian:

H/ħ = ω_1/2 σ_(z,1) + ω_2/2 σ_(z,2) + J (σ_(+,1)σ_(-,2) + σ_(-,1)σ_(+,2)) + ∑_m [ ω_(m,1)( a_(m,1)^† a_(m,1) + 1/2) + g_(m,1)(a_(m,1)^†σ_(-,1) + a_(m,1)σ_(+,1))] + ∑_n [ ω_(n,2)( a_(n,2)^† a_(n,2) + 1/2) + g_(n,2)(a_(n,2)^†σ_(-,2) + a_(n,2)σ_(+,2))],

where ω_1 and ω_2 are the qubit frequencies, J is the qubit-qubit coupling, ω_(m,1) and ω_(n,2) are the HBAR mode frequencies corresponding to their respective qubits, and g_(m,1), g_(n,2) are the couplings to the HBAR modes. The σ_i,j are the Pauli operators and a_m, a_m^† are the annihilation and creation operators. In order to theoretically analyze the experiments described below, we determine the time evolution of the system using the Lindblad master equation. We include the qubits' decay and dephasing, as well as mechanical mode decay. Figure <ref> shows an optical image of the device used for the experiments. The device consists of a superconducting circuit with two qubits, each with their own readout, flux bias control and excitation lines. The qubits have a capacitive coupling to each other, as well as to the HBAR flip chip that covers both. The qubits have a round pad on the bottom arm of around 80 μm in diameter which defines the capacitive coupling to the HBAR chip. The circuit was patterned using electron beam lithography and metallised with evaporated aluminium. Double-angle evaporation was used to create the Josephson junctions for the qubits. The HBAR flip chip consists of a 900 nm AlN piezo layer, a 250 μm sapphire layer and a 60 nm Mo layer in between to act as a ground plane to enhance the coupling to the mechanical modes <cit.>. The HBAR was placed by hand onto the circuit chip and glued with standard epoxy. The qubit frequencies can be tuned in the range 3.7-4.5 GHz, and the qubits have readout resonator frequencies of 6.230 GHz and 6.013 GHz. The operating points of the qubits were chosen to maximise their coherence properties and hence they are operated at or close to their minimum frequencies, as shown in Fig. <ref>. The bottom two plots of figure <ref> show two-tone measurements sweeping the qubit frequencies in the neighbourhood of their operating frequencies chosen for later experiments. The operating frequency of qubit 1 was set near its minimum at ω_1,OP/2π = 3.7778 GHz and that of qubit 2 at its minimum at ω_2,OP/2π = 3.6673 GHz. The many small anticrossings occur when a qubit is sweeping past an HBAR mode, while the larger anticrossing at 3.778 GHz seen in the data for qubit 2 corresponds to the qubit-qubit coupling. The spacing between HBAR modes (free spectral range, FSR) is around 22 MHz, which corresponds well with the thickness of the HBAR sapphire layer. The dashed lines show the eigenvalues according to equation <ref>. At the qubits' respective operating points, they had T_1 values of 2.2 μs and 2.41 μs, as well as T_2 values of 4.41 μs and 1.02 μs. Their respective 2g couplings to their HBAR modes were 2.55 MHz and 2.85 MHz, with the mechanical T_1 values being 380 ns and 320 ns.
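The Lindblad dynamics referred to above can be reproduced with standard open-quantum-system tools; the following QuTiP sketch (our own illustration, not the authors' analysis code) simulates one qubit resonantly coupled to a single acoustic mode with parameters of the order quoted for qubit 1. The Fock-space truncation and operator conventions are our assumptions.

    import numpy as np
    import qutip as qt

    # Parameters of the order quoted above for qubit 1 (angular frequencies in rad/s).
    g = 2 * np.pi * 2.55e6 / 2            # "2g" coupling of 2.55 MHz  ->  g/2pi = 1.275 MHz
    T1_q, T2_q = 2.2e-6, 4.41e-6          # qubit relaxation and coherence times
    T1_m = 380e-9                         # mechanical mode lifetime
    N = 5                                 # phonon Fock-space truncation (assumption)

    a = qt.tensor(qt.destroy(N), qt.qeye(2))      # acoustic-mode annihilation operator
    sm = qt.tensor(qt.qeye(N), qt.destroy(2))     # qubit lowering operator
    H = g * (a.dag() * sm + a * sm.dag())         # resonant Jaynes-Cummings interaction

    gamma_phi = max(1.0 / T2_q - 1.0 / (2 * T1_q), 0.0)     # pure dephasing rate
    c_ops = [np.sqrt(1.0 / T1_m) * a,                       # phonon decay
             np.sqrt(1.0 / T1_q) * sm,                      # qubit relaxation
             np.sqrt(2.0 * gamma_phi) * sm.dag() * sm]      # qubit pure dephasing

    psi0 = qt.tensor(qt.basis(N, 0), qt.basis(2, 1))        # mode in vacuum, qubit excited
    tlist = np.linspace(0.0, 2e-6, 401)
    result = qt.mesolve(H, psi0, tlist, c_ops, e_ops=[sm.dag() * sm])
    # result.expect[0] traces out the vacuum Rabi oscillation of the qubit population.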
The system had a qubit-qubit 2g coupling of 16.7 MHz. Figure <ref> shows a vacuum Rabi oscillation experiment where an excitation swaps between an initially excited qubit and its coupled mechanical modes. In panels (a,b) qubit 2 is being controlled and measured, and we see vacuum Rabi oscillations with the mechanical modes (red arrows) and also with the other qubit (blue arrows), corresponding to the anticrossings seen in figure <ref> bottom right. In figure <ref> (c,d) qubit 1 is controlled and experiences vacuum Rabi oscillations with its coupled mechanical modes, following the anticrossings seen in figure <ref> bottom left. Since the flux is tuned in the positive direction, it first sweeps on resonance with the lower mode and then with the upper mode seen in figure <ref> bottom right. If one looks closely, the vacuum Rabi oscillation fringes can be seen to be asymmetric, especially in figure <ref> (a). The source of this is unknown, and it results in deviations from the theory at later simulation times. Some slight asymmetry could be generated for the nearest mode by including the effect of the π pulse specifically in the simulations, but this was not enough to reproduce the long tail of the fringes from the mode nearest the qubit operation point seen in figure <ref> (a), which extend very far, up to where qubit 1 is. It can also be seen in figure <ref> (a) that the vacuum Rabi oscillations with qubit 1 also show these extended fringes on the right side. This behaviour may be related to the same phenomenon that is seen in the frequency domain, where at the avoided crossing the upper branch has less weight than the lower branch. It is possible that at least some of the asymmetry is caused by pulse distortion <cit.>. The line cuts in figure <ref> (b) show a double-oscillation feature that occurs when qubit 2 is near the qubit 1 frequency. This is because the excitation undergoes Rabi oscillations with both the other qubit and the nearby acoustic modes at the same time but on different time scales, hence the multiple-oscillation feature. It is important to determine whether or not the qubits couple to the same set of acoustic modes. The issue is nontrivial since, on one hand, the qubits are in close proximity to each other and share the same HBAR chip, which would point to delocalized acoustic modes. On the other hand, one could argue that the electric field of either qubit should confine the HBAR mode only below its own electrode. We attempted to carry out finite-element simulations; however, a full three-dimensional solution was beyond reach. In two dimensions and with a limited overtone number, we saw indications of a delocalized acoustic mode, with the study showing that moving the qubit coupling pad changed the strength of coupling to modes of different lateral profile. Experimentally, the issue cannot immediately be resolved in spectroscopy, since the HBAR spectral lines in figure <ref> are equal within measurement uncertainties, which is however expected based on the geometry. A time-domain experiment was done to confirm that the qubits couple to their individual sets of acoustic modes. This was done by swapping an excitation from qubit 1 to its acoustic mode at 3.788 GHz, and then tuning it away whilst tuning qubit 2 on resonance with this mode. The experiment found no response, and so we concluded that the qubits indeed couple to separate modes, with any stray coupling being too weak to observe. Finally, we demonstrate the swapping of an excitation through the degrees of freedom of the system.
Figure <ref> shows the pulse sequence and measured data. The excitation swaps from the 3.7885 GHz HBAR mode coupled to qubit 1 all the way to various HBAR modes coupled to qubit 2. The resulting measurement data is similar to figure <ref> (a), as the last part of the pulse sequence is similar to that experiment; however, this excitation has travelled from an acoustic mode coupled to the opposite qubit, which is why the initial excited-state population is reduced due to decoherence. Now that we have shown the ability to transfer excitations around the system, we would in principle be able to create an entangled state between arbitrary acoustic modes. However, due to the limited coherence of the system, we were not able to measure this in practice. One needs to measure the entangled modes simultaneously under a series of tomography pulses in order to produce the density matrix of the system (for example see <cit.>). This was not straightforward to do in our system, as the acoustic modes are coupled to different qubits, meaning we would need to read out the acoustic modes in single shot to be able to correlate the results. We are limited both by our single-shot readout fidelity of <60%, and by not being in the strong dispersive regime, which would require acoustic T_1 times of 8 μs at our coupling magnitudes. A possible simplification is to measure only an entangled state which does not occupy number states higher than |1⟩, so that one can swap the state back to the qubits and measure them. Due to the low readout fidelity, we have to use an ensemble measurement. There is a tomography pulse scheme to measure the two-qubit density matrix using an ensemble measurement <cit.>. This requires an appropriate two-qubit gate as a part of the tomography pulse scheme, and in our case this would be an iSWAP pulse. The calibration of this iSWAP pulse was problematic, with a fidelity of 55%, which was not sufficient to do the two-qubit tomography. We estimate that a gate fidelity higher than about 70% is required to be able to perform the measurement. In order to improve the fidelity of single- and two-qubit gates in the system, one would like the FSR to be larger than the coupling by a factor of at least 20. This is so that if the qubit is in between two modes, it will only interact dispersively. Also, the FSR should be larger than the inverse pulse widths, so that the pulses do not excite nearby mechanical modes as well. Longer coherence times for both the qubits and the acoustics are important towards this end. The ideal solution would be the development of a tunable coupler, to be able to selectively couple to modes of interest, which is important for using HBARs in quantum information processing. In conclusion, we have fabricated and measured a sample consisting of two qubits, each coupled to an individual set of high-overtone bulk acoustic (HBAR) modes as well as to each other. An excitation was swapped from an HBAR mode coupled to one qubit to an HBAR mode coupled to the other qubit. This demonstrates the possibility to integrate multiple HBAR devices into a superconducting circuit, where complex quantum states could be stored across these devices. We would like to thank Mikael Kervinen for useful discussions. We acknowledge the facilities and technical support of Otaniemi research infrastructure for Micro and Nanotechnologies (OtaNano) that is part of the European Microkelvin Platform.
This work was supported by the Academy of Finland (contract 307757), by the European Research Council (101019712), and by the Wihuri Foundation. We acknowledge funding from the European Union's Horizon 2020 research and innovation program under the QuantERA II Programme (13352189). The work was performed as part of the Academy of Finland Centre of Excellence program (project 336810).

[1] A. A. Clerk, K. W. Lehnert, P. Bertet, J. R. Petta, and Y. Nakamura, "Hybrid quantum systems with circuit quantum electrodynamics," Nature Physics 16, 257–267 (2020).
[2] C. A. Regal, J. D. Teufel, and K. W. Lehnert, "Measuring nanomechanical motion with a microwave cavity interferometer," Nature Physics 4, 555–560 (2008).
[3] J. D. Teufel, D. Li, M. S. Allman, K. Cicak, A. J. Sirois, J. D. Whittaker, and R. W. Simmonds, "Circuit cavity electromechanics in the strong-coupling regime," Nature 471, 204–208 (2011).
[4] A. D. O'Connell, M. Hofheinz, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, J. M. Martinis, and A. N. Cleland, "Quantum ground state and single-phonon control of a mechanical resonator," Nature 464, 697–703 (2010).
[5] P. Arrangoiz-Arriola, E. A. Wollack, Z. Wang, M. Pechal, W. Jiang, T. P. McKenna, J. D. Witmer, R. Van Laer, and A. H. Safavi-Naeini, "Resolving the energy levels of a nanomechanical oscillator," Nature 571, 537–540 (2019).
[6] Y. Chu, P. Kharel, W. H. Renninger, L. D. Burkhart, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, "Quantum acoustics with superconducting qubits," Science 358, 199–202 (2017).
[7] M. Kervinen, I. Rissanen, and M. Sillanpää, "Interfacing planar superconducting qubits with high overtone bulk acoustic phonons," Physical Review B 97, 205443 (2018).
[8] M. V. Gustafsson, T. Aref, A. F. Kockum, M. K. Ekström, G. Johansson, and P. Delsing, "Propagating phonons coupled to an artificial atom," Science 346, 207–211 (2014).
[9] A. Noguchi, R. Yamazaki, Y. Tabuchi, and Y. Nakamura, "Qubit-assisted transduction for a detection of surface acoustic waves near the quantum limit," Physical Review Letters 119, 180505 (2017).
[10] B. A. Moores, L. R. Sletten, J. J. Viennot, and K. W. Lehnert, "Cavity quantum acoustic device in the multimode strong coupling regime," Physical Review Letters 120, 227701 (2018).
[11] A. Bienfait, K. J. Satzinger, Y. P. Zhong, H.-S. Chang, M.-H. Chou, C. R. Conner, É. Dumur, J. Grebel, G. A. Peairs, R. G. Povey, and A. N. Cleland, "Phonon-mediated quantum state transfer and remote qubit entanglement," Science 364, 368–371 (2019).
[12] V. J. Gokhale, B. P. Downey, D. S. Katzer, N. Nepal, A. C. Lang, R. M. Stroud, and D. J. Meyer, "Epitaxial bulk acoustic wave resonators as highly coherent multi-phonon sources for quantum acoustodynamics," Nature Communications 11, 2314 (2020).
[13] Y. Chu, P. Kharel, T. Yoon, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, "Creation and control of multi-phonon Fock states in a bulk acoustic-wave resonator," Nature 563, 666–670 (2018).
[14] M. Kervinen, J. E. Ramírez-Muñoz, A. Välimaa, and M. A. Sillanpää, "Landau-Zener-Stückelberg interference in a multimode electromechanical system in the quantum regime," Physical Review Letters 123, 240401 (2019).
[15] E. A. Wollack, A. Y. Cleland, R. G. Gruenke, Z. Wang, P. Arrangoiz-Arriola, and A. H. Safavi-Naeini, "Quantum state preparation and tomography of entangled mechanical resonators," Nature 604, 463–467 (2022).
[16] M. Kervinen, A. Välimaa, J. E. Ramírez-Muñoz, and M. A. Sillanpää, "Sideband control of a multimode quantum bulk acoustic system," Physical Review Applied 14, 054023 (2020).
[17] U. von Lüpke, Y. Yang, M. Bild, L. Michaud, M. Fadel, and Y. Chu, "Parity measurement in the strong dispersive regime of circuit quantum acoustodynamics," Nature Physics 18, 794–799 (2022).
[18] S. Kwon, A. Tomonaga, G. Lakshmi Bhai, S. J. Devitt, and J.-S. Tsai, "Gate-based superconducting quantum computing," Journal of Applied Physics 129, 041102 (2021).
[19] M. A. Rol, L. Ciorciaro, F. K. Malinowski, B. M. Tarasinski, R. E. Sagastizabal, C. C. Bultink, Y. Salathe, N. Haandbaek, J. Sedivy, and L. DiCarlo, "Time-domain characterization and correction of on-chip distortion of control pulses in a quantum processor," Applied Physics Letters 116, 054001 (2020).
[20] M. Li, G. Xue, X. Tan, Q. Liu, K. Dai, K. Zhang, H. Yu, and Y. Yu, "Two-qubit state tomography with ensemble average in coupled superconducting qubits," Applied Physics Letters 110, 132602 (2017).
http://arxiv.org/abs/2307.04047v1
20230708211641
Calibration-Aware Margin Loss: Pushing the Accuracy-Calibration Consistency Pareto Frontier for Deep Metric Learning
[ "Qin Zhang", "Linghan Xu", "Qingming Tang", "Jun Fang", "Ying Nian Wu", "Joe Tighe", "Yifan Xing" ]
cs.CV
[ "cs.CV" ]
Calibration-Aware Margin Loss: Pushing the Accuracy-Calibration Consistency Pareto Frontier for Deep Metric Learning
Qin Zhang^*,1, Linghan Xu^*,1, Qingming Tang^2, Jun Fang^1, Ying Nian Wu^1, Joe Tighe^1, Yifan Xing^1
^1 AWS AI Labs ^2 Alexa AI
{qzaamz, linghax, qmtang, junfa, wunyin, tighej, yifax}@amazon.com
=================================================================================================================
*Equal contribution.

The ability to use the same distance threshold across different test classes / distributions is highly desired for a frictionless deployment of commercial image retrieval systems. However, state-of-the-art deep metric learning losses often result in highly varied intra-class and inter-class embedding structures, making threshold calibration a non-trivial process in practice. In this paper, we propose a novel metric, named the Operating-Point-Inconsistency Score (OPIS), that measures the variance in the operating characteristics across different classes in a target calibration range, and demonstrate that high accuracy of a metric learning embedding model does not guarantee calibration consistency for both seen and unseen classes. We find that, in the high-accuracy regime, there exists a Pareto frontier where accuracy improvement comes at the cost of calibration consistency. To address this, we develop a novel regularization, named the Calibration-Aware Margin (CAM) loss, to encourage uniformity in the representation structures across classes during training. Extensive experiments demonstrate CAM's effectiveness in improving calibration consistency while retaining or even enhancing accuracy, outperforming state-of-the-art deep metric learning methods.

§ INTRODUCTION

Deep metric learning (DML) learns a discriminative representation via a deep neural network to align the distances between embeddings with semantic similarities, such that visually similar samples are close to each other and dissimilar samples are far apart. Given the massive success of DML on visual recognition tasks <cit.>, a natural challenge arises in making the algorithms more robust in their performance against different seen and unseen test classes, such that a single distance threshold can be used for any test dataset without sophisticated post-training calibration. Common DML losses such as the contrastive loss <cit.>, the triplet loss <cit.> and proxy-based losses <cit.> suffer from the problem of threshold inconsistency across different classes, as they implicitly optimize the distance threshold based on "semantic" similarities, whose definition may vary from class to class. Consequently, even if an embedding model has strong separability, different classes may still require different distance thresholds to maintain a consistent operating point in false reject rate (FRR) and false acceptance rate (FAR). Such a problem is more pronounced in real-world testing environments where both the test classes and test distributions are unknown. There are two main causes for this threshold inconsistency problem. First, the model is usually estimated over a training population, and may not properly characterize the testing population in the presence of domain mismatch, covariate and diversity shift <cit.>, as well as extension to the open set and open world <cit.>.
Second, there can be high variance in intra-class compactness and inter-class separation across both training and testing populations, as observed in <cit.>, even when the training distribution accurately characterizes the test distribution. We refer to this phenomenon in DML, namely that different classes require different distance thresholds to achieve a similar retrieval or recognition accuracy, as calibration inconsistency. Unlike calibration for closed-set classification, which focuses on making the predicted confidence probability match the empirical correctness <cit.>, calibration in DML refers to finding a transformation of the embedding distance to achieve target operating points in FAR and FRR. As DML aims at fine-grained recognition with the requirement of generalization to open-world, unseen test-time classes, the calibration inconsistency problem becomes increasingly relevant for model evaluation, threshold selection, and broader concerns about robustness, fairness and bias. Traditional calibration methods such as Platt calibration <cit.> or isotonic regression <cit.> use a calibration dataset to calibrate the distance measurements to achieve target operating points for a trained embedding model. However, such methods are unscalable, as the hand-crafting of calibration sets <cit.> is highly costly and requires knowledge of the test distribution. To mitigate this, we wish to learn a calibration-consistent metric space during embedding model training. Note that in this paper, our goal is not to unify calibration-aware training and post-hoc model calibration, since the two are complementary to each other and cannot replace one another. Instead, we focus on calibration-aware training as it has the potential to improve both accuracy and calibration consistency concurrently. In this work, we introduce the following key insights. First, we quantify the notion of calibration inconsistency in DML by proposing a novel metric, called the Operating-Point-Inconsistency Score (OPIS), which measures the variance in the operating characteristics across different classes in the target calibration range. In addition, we find that the calibration inconsistency problem cannot be resolved with higher model accuracy. As illustrated in <ref>, there exists an accuracy-calibration consistency Pareto frontier in the high-accuracy regime where the calibration consistency starts to deteriorate with increased accuracy. To address this, we propose a novel hinge-loss-based regularization named the Calibration-Aware Margin (CAM) loss. CAM introduces two margin-based constraints, one each for a regularization over the positive and negative sample pairs respectively, as well as a simple "attention" mechanism to focus on the hard pairs only. These mechanisms effectively prevent excessive class compactness and over-separation between classes. Therefore, the intra-class and inter-class embedding structures become less dependent on the label, leading to more consistent thresholds across classes. We evaluate the proposed OPIS calibration inconsistency metric and CAM regularization over three image retrieval tasks, covering the data domains of nature species, birds and cars. We find the accuracy-calibration consistency trade-off to be a common issue across all three domains. With CAM, we outperform state-of-the-art (SoTA) DML methods in both calibration consistency and retrieval accuracy.
In particular, on iNaturalist <cit.>, the largest image retrieval benchmark, we reduce the OPIS calibration inconsistency score from 3.7e-4 to 1.8e-4 while improving retrieval Recall@1 from 84.0% to 85.1%. To summarize, we make the following contributions: (i) We formalize the notion of calibration inconsistency in DML, and develop a novel OPIS metric to quantify this property; (ii) We evaluate the OPIS metric over various DML losses, and identify for the first time, an accuracy-calibration consistency Pareto frontier; (iii) To improve calibration consistency with training, we propose a novel CAM regularization which boosts the performance of SoTA methods on a variety of image retrieval tasks in both calibration consistency and accuracy; (iv) We find that we can further improve accuracy by combining CAM with class-adaptive weights approximated by the vMF concentration <cit.>. § RELATED WORKS Calibration Inconsistency in DML The advancement in DML has been focused on accuracy, generalization and scalability. The Smooth-AP loss <cit.> is a ground-breaking work that optimizes a smoothed approximation for the non-differentiable average precision. Similar to Smooth-AP, the Recall@k Surrogate loss <cit.> (L_RS@k) approximates recall@k – the standard metrics for evaluating image retrieval methods. Using vision-transformer architectures and a very large batch size (=4000), L_RS@k achieves SoTA performance in several large-scale image retrieval benchmarks <cit.>. However, when the number of classes is very large (e.g. face recognition), these pairwise methods become prohibitively inefficient. To reduce the computational complexity associated with large class numbers, proxy-based approaches such as <cit.> are commonly employed where sample representations are compared against class prototypes. During inference, it is a common practice to normalize the backbone embeddings to lie on the unit hypersphere <cit.> so that its metric space can be directly analyzed by measurements such as the cosine similarity, although earlier works in DML also used other metrics such as the Mahalanobis distance <cit.> or distance metrics learned from data <cit.>. While these methods have achieved good accuracy, they are prone to bias <cit.> and poor calibration consistency in production settings. To illustrate this, we give a qualitative example of the non-uniformity in embedding structures across classes, which is the root cause of calibration inconsistency. We train a shallow CNN on a random subset of the MNIST dataset <cit.> using the Arcface <cit.> loss with a feature dimension of three, and use the rest of the dataset for testing. As is shown in <ref>, the class centroid distribution is far from uniform with varying representation compactness across classes. For example, digits 4, 8, 9 are very close to each other, while digit 1 is far from the rest. Meanwhile, the embedding space is not fully utilized – nearly half of the space appears to be wasted. In <ref>, we further show that high accuracy does not guarantee calibration consistency by visualizing the utility to distance curves for test classes in the CUB-200 dataset <cit.>. The utility score is defined in <ref> as the F_1 score based on specificity and sensitivity. As illustrated, higher accuracy does not guarantee better calibration consistency (e.g., ProxyNCA <cit.> has better retrieval accuracy in recall@1 than Smooth-AP <cit.>, yet the consistency in the operating characteristics across classes appears to be worse). 
This indicates that high accuracy does not guarantee good calibration consistency in DML. Nevertheless, there have been few works in the literature that study this problem.

Calibration-Aware Training. Though calibration-aware training is underexplored in DML, it has been widely studied in classification and regression tasks. Common approaches use regularization to push the model update toward calibrated results, such as the confidence penalty <cit.>, the DCA term penalty <cit.> and the Multi-Class Difference in Confidence and Accuracy loss <cit.>. A recent work <cit.> revisits the focal loss by introducing adaptiveness into the γ parameter to prevent over-confident predictions and improve the overall calibration. In the DML domain, a recent study <cit.> proposes the Cross-Example Negative Mining loss (L_CENM) to improve global score calibration for the learnt embedding by combining threshold relevancy and top-k relevancy, with an application to document-retrieval systems. To our knowledge, it is the first loss function tailored to improving threshold calibration consistency in DML. However, the CENM loss is prone to sub-optimality and convergence issues if k is not properly selected. Additionally, in face recognition applications, <cit.> proposes a false positive rate penalty loss to mitigate bias across different demographic groups. <cit.> also proposes the Threshold Consistency Penalty to improve the consistency in the thresholds across different domains of face images, which is shown to improve the model performance under the single-threshold evaluation protocol. Nonetheless, <cit.> requires the construction of a large feature queue to ensure sufficient negative pairs for different domains, which can be impractical for fine-grained visual recognition where the number of "domains" can be very large. Meanwhile, as they are intended for face recognition, both <cit.> and <cit.> focus on controlling only FAR, which limits their applicability to other areas where recall may be important.

Metrics for Calibration. Calibration measures how much one can trust a model's predictions. Since <cit.>, many quantitative metrics have been proposed for confidence calibration of classification models. Expected Calibration Error <cit.> is one of the most popular metrics. It indicates the level of miscalibration by taking the average L1 distance between the DNN's maximum output prediction and the actual accuracy over a validation set. Maximum Calibration Error <cit.> measures the maximum discrepancy instead of the expectation, and is preferred for safety-critical applications. However, both metrics suffer from issues such as failing to condition on the class or to assess all the predictions a model makes, which in practice may lead to conflicting conclusions. Nixon et al. <cit.> conducted a comprehensive study and proposed several solutions to address these flaws. Their recommended approach combines the L_2 norm with class conditioning and adaptive binning to tackle the non-uniform data dispersion across probability ranges, which is shown to have more consistent metric rank ordering across various datasets. However, metrics for calibration threshold inconsistency in DML are still largely underexplored.

§ QUANTIFY CALIBRATION CONSISTENCY IN DML

Operating-Point-Inconsistency Score. Despite commonalities in thresholding, class conditionality and variance-bias trade-off <cit.>, metrics defined for confidence calibration in classification <cit.> cannot be directly applied to measure calibration in DML.
The reason is that the former produces a probability that can be compared to the empirical frequency of correctness while the latter outputs a distance for semantic similarity that is intrinsically non-probabilistic, due to the ambiguity in semantic similarity across classes. <cit.> introduced the calibration threshold for face recognition systems, which corresponds to the distance threshold at a given overall FAR for a calibration dataset. While this notion links the calibration threshold with the overall FAR, it fails to measure the consistency in the operating characteristics across different classes that cover both sensitivity (FRR) and specificity (FAR). To address this, we formally define a utility measure for accuracy as a function of the distance threshold d. Let ϕ be one side of the accuracy metric (e.g. precision or specificity), and ψ be the other side (e.g. recall or sensitivity). Analogous to the commonly-used F_β metric <cit.>, assuming one is willing to trade 1 unit of ϕ for c unit of ψ (c=1 if not specified), we can summarize the two metrics into one utility score U by the harmonic mean, as defined below: U(d) = (1+c^2)·ϕ(d) ·ψ(d)/c^2ϕ(d)+ψ(d) This utility score is a concave function whose value ranges from 0 (perfectly wrong) to 1 (perfectly accurate). We consider the L_2 distance on a unit hypersphere as the distance metric, which gives [0,2] as the global calibration range. On a unit hypersphere, the pair-wise L_2 distance and cosine similarity are one-to-one bijective. Without loss of generality, we let ϕ be specificity and ψ be sensitivity as they are not only more relevant for visual recognition systems but also less sensitive to test data composition. Per use case, there can be lower / upper bound requirement on the recognition accuracy that determines the calibration range, denoted as [d^min, d^max]. Note that when the requirement is measured over a calibration set at one specific target FAR, this calibration range is equivalent to the calibration threshold defined in <cit.>. Equipped with these definitions, we propose the Operating-Point-Inconsistency Score (OPIS) to quantify the variance in the utility curves across test classes in the calibration range as follows: OPIS=∑_i=1^T∫_d^min^d^maxw_i· ||U_i(d)-U̅(d)||^2 dd/∑_i=1^T w_i · (d^max-d^min) where i=1,2,...,T is the index for the test classes, w_i is the class weight (we let w_i=1 for simplicity), and U̅(d) is the average utility score for the entire test dataset. We highlight the importance of the OPIS metric by comparing it to the commonly-used accuracy metric in image retrieval tasks, recall@k. While recall@k focuses on top-k relevancy, OPIS emphasizes threshold-relevancy, which is often preferred in commercial image retrieval systems for its robustness against unknown test distributions. In addition, OPIS is defined over both FAR and FRR, while recall@k fails to capture FAR, making it less desirable for safety-critical applications (e.g., where top-k retrieved samples may contain offensive or illegal contents). As quality assessment needs to be multi-dimensional, OPIS should be used orthogonally to recall@k as an additional guard rail for model evaluation, as illustrated in <ref>. For example, when comparing two models A and B, if B's recall@k is higher and OPIS is lower, then B is better than A in both accuracy and calibration consistency. However, if B's recall@k and OPIS are both higher than A, then B has worse calibration consistency than A, despite its higher accuracy. 
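To make the metric concrete, the sketch below shows one way the utility curves and OPIS could be computed from per-class pairwise distances on a threshold grid. This is an illustrative Python implementation written for this text, not code released with the paper: the function names (utility, opis), the empirical estimators of sensitivity and specificity from genuine and impostor pair distances, the trapezoidal integration, and the choice of taking U̅(d) as the mean of the per-class utility curves are all assumptions made here, with c = 1 and uniform class weights w_i = 1.

import numpy as np

def utility(phi, psi, c=1.0):
    # Harmonic-mean utility U(d) built from specificity phi(d) and sensitivity psi(d).
    denom = c**2 * phi + psi
    return np.where(denom > 0, (1 + c**2) * phi * psi / denom, 0.0)

def opis(pos_dists, neg_dists, d_min, d_max, n_grid=200, c=1.0):
    # pos_dists / neg_dists: dicts mapping class id -> arrays of L2 distances
    # for genuine (same-class) and impostor (cross-class) pairs of that class.
    grid = np.linspace(d_min, d_max, n_grid)
    curves = []
    for cls in pos_dists:
        pos = np.asarray(pos_dists[cls])
        neg = np.asarray(neg_dists[cls])
        psi = np.array([(pos <= d).mean() for d in grid])   # sensitivity at threshold d
        phi = np.array([(neg > d).mean() for d in grid])    # specificity at threshold d
        curves.append(utility(phi, psi, c))
    curves = np.stack(curves)                # shape [num_classes, n_grid]
    u_bar = curves.mean(axis=0)              # assumed stand-in for the dataset-level utility
    sq_dev = ((curves - u_bar) ** 2).mean(axis=0)
    return np.trapz(sq_dev, grid) / (d_max - d_min)

In this form, the calibration range [d_min, d_max] would be obtained from the application's accuracy requirements on a calibration set, exactly as described above.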
ϵ-OPIS for Utility Divide in a Dataset The overall OPIS metric does not emphasize on the outlier classes. For applications where outlier threshold calibration consistency is essential, we provide a more fine-grained metric in extension to overall OPIS that focuses on the utility inequality between the best and worst sub-groups at a given distance threshold. We define the expected utility of the ε percentile of best-performing classes as follows: U_ε_best(d) = ϕ_ε_best(d) ·ψ_ε_best(d)/ϕ_ε_best(d)+ψ_ε_best(d) where ϕ_ε_best(d) and ψ_ε_best(d) are the accuracy metrics calculated for the entirety of the ε percentile of the best-performing classes. By replacing ε_best in <ref> with ε_worst, the same can be defined for U_ε_worst(d) which accounts for the ε percentile of the worst-performing classes. Then, we define the ε-OPIS metric as the following: ε-OPIS = ∫_d^min^d^max ||U_ε_worst(d)- U_ε_best(d)||^2 dd/d^max-d^min By definition, the ε-OPIS metric is maximized at ε→ 0, and eventually becomes zero when ε→ 100% as the best-performing set and worst-performing set become identical. § TOWARDS CALIBRATION-CONSISTENT DML We propose our calibration-aware training framework using a Calibration-Aware Margin (CAM) regularization to improve calibration consistency across different classes during training, as illustrated in <ref>. CAM can be combined with any commonly-used base loss to reduce the trade-off between accuracy and calibration. In the following, we discuss the details of CAM loss as well as its adaptive variant. §.§ Calibration-Aware Margin Loss To disambiguate the distance thresholds across different classes, we propose the CAM regularization, which explicitly penalizes hard positive sample pairs (whose cosine similarity is less than a certain positive margin) and hard negative sample pairs (whose cosine similarity is greater than a certain negative margin). Let S^+ and S^- be the sets of cosine similarity scores for positive pairs and negative pairs in a mini-batch, and |S^m^+| and |S^m^-| be the number of positive and negative pairs selected given m^+ and m^-, respectively. The CAM regularizer can then be written as: L_CAM = λ^+·∑_s∈ S^+1_s ≤ m^+ (m^+-s)/|S^m^+| + λ^-·∑_s∈ S^-1_s ≥ m^- (s-m^-)/|S^m^-| where 1_ condition =1 if condition is true, and 0 otherwise, λ^+ and λ^- are the weights of positive and negative regularization, and m^+, m^- are cosine margins for positive and negative pairs, respectively. This regularizer can be combined with any base loss L_base, yielding the final objective: L_final = L_base + L_CAM Analysis. Our CAM loss is different from contrastive loss as it does not aim to bring all similar samples closer and dissimilar samples far apart. Instead, it penalizes positive pairs that are too dissimilar and negative pairs that are too similar. CAM is also different from the margin-based softmax losses such as  <cit.> in several ways, as illustrated in <ref>. First, designed as a regularization that functions on top of a base loss, CAM only applies to the hard sample pairs (positive or negative) near the margin boundaries, defined by m^+ and m^-, via the indicator functions which act as a simple “attention" mechanism. This sampling mechanism differs from the semi-hard negative mining strategy <cit.> as well as its variants <cit.> because the sampling strategy in CAM is defined based on the absolute values of L_2 distance of the positive and negative pairs, respectively, instead of their relative differences. 
Second, CAM uses two margin parameters to regularize both the intra-class and inter-class distance distributions, which captures both hard positive and negative examples and therefore generates more hard pairs within a mini-batch. Finally, CAM is a pair-wise loss, which is better at capturing sample-to-sample relationship compared to proxy-based methods. Thus, the resulting metric space has a more equidistant class centroid distribution with improved uniformity in the representation compactness across different classes. Together, these factors create more consistent distance thresholds across different classes by actively preventing the formation of excessively compact classes and over-separation between classes. Complexity. In a mini-batch with size n, the complexity of the CAM loss is 𝕆(n^2) as it compares every sample with all samples in the mini-batch. For large-scale image benchmarks where the number of training classes (K) is significantly greater than the batch size (K ≫ n), this complexity is comparable to or even less than most proxy-based (𝕆(nK)) or pair-based losses. For instance, the largest batch size used in literature is 4000 as in <cit.>, which is still less than the number of classes in iNaturalist <cit.> (=5690). §.§ Class-Adaptive Margin Many studies have introduced adaptiveness in the training objective using a variety of indicators  <cit.>. From a slightly different angle, we argue that class-level representation compactness should be another important factor for adaptiveness. Motivated by this, we introduce the class-adaptive CAM regularization (L_AdaCAM) based on the class compactness approximated by a von Mises-Fisher (vMF) <cit.> distribution characterized by a concentration parameter, κ. The higher the concentration, the more compact a class is. A widely-used approximation of κ is Sra's method <cit.> which takes the following form: κ_j =R̅(M-R̅^2)/(1-R̅^2) where R̅=∑_i=1^n_j f_i/n_j is the norm of the average embedding (f) for class j containing n_j samples. The estimated κ is transformed into a class compactness score z_j=2κ_j-κ_min-κ_max/κ_max-κ_min, where κ_min, κ_max are pre-defined normalization constants for κ. Then, the adaptive CAM (AdaCAM) loss, can be derived by replacing m^+ in <ref> with a class-adaptive m_j^+ while keeping the negative regularization fixed across all classes, expressed as follows: m_j^+=m^+· w_j^vMF/𝔼_j [w_j^vMF] where w_j^vMF=1/1+e^z_j is the class-adaptive scale that gives smaller positive margins for classes with higher κ. Analysis. AdaCAM further exploits the potential for accuracy improvement by relaxing the positive margin class-adaptively according to the vMF model. With this relaxed constraint in m^+, we expect a minor trade-off between calibration consistency and accuracy, as shown in <ref>. Complexity. We do not train with AdaCAM from scratch as the vMF model requires high embedding quality to yield a meaningful approximation. Instead, after a model is trained with L_base+L_CAM, we fine-tune it with L_base+L_AdaCAM for 30 epochs at a small learning rate of 1e-6. For memory efficiency, given R̅'s additive nature in <ref>, we progressively update a dictionary for average representation per class after each forward pass, which takes an additional memory of 𝕆(KM) where M is the embedding dimension. At the end of every epoch, we compute κ for each class all at once, leading to a negligible overhead in overall memory. 
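The following PyTorch-style sketch illustrates the two ingredients discussed in this section: the CAM regularizer over in-batch cosine similarities and Sra's approximation of the vMF concentration used by AdaCAM. It is our own minimal illustration rather than the authors' implementation; the function names, the default margins m^+ = 0.8 and m^- = 0.4, and the assumption of L2-normalized embeddings are choices made for the example (in practice the margins are tuned per benchmark and the regularizer is added on top of a base DML loss).

import torch

def cam_regularizer(embeddings, labels, m_pos=0.8, m_neg=0.4, lam_pos=1.0, lam_neg=1.0):
    # embeddings: [n, M] tensor, assumed L2-normalized; labels: [n] class ids.
    sim = embeddings @ embeddings.t()                         # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(len(labels), dtype=torch.bool, device=sim.device)
    pos_sim = sim[same & off_diag]                            # positive (same-class) pairs
    neg_sim = sim[~same]                                      # negative (cross-class) pairs
    hard_pos = pos_sim[pos_sim <= m_pos]                      # positives that are too far apart
    hard_neg = neg_sim[neg_sim >= m_neg]                      # negatives that are too close
    loss = embeddings.new_zeros(())
    if hard_pos.numel() > 0:
        loss = loss + lam_pos * (m_pos - hard_pos).mean()     # division by |S^{m+}| via mean()
    if hard_neg.numel() > 0:
        loss = loss + lam_neg * (hard_neg - m_neg).mean()     # division by |S^{m-}| via mean()
    return loss

def vmf_concentration(class_embeddings):
    # Sra's approximation: kappa = R_bar * (M - R_bar^2) / (1 - R_bar^2),
    # where R_bar is the norm of the mean (normalized) embedding of the class.
    r_bar = class_embeddings.mean(dim=0).norm()
    m_dim = class_embeddings.shape[1]
    return r_bar * (m_dim - r_bar**2) / (1 - r_bar**2 + 1e-8)

The class-adaptive positive margin of AdaCAM would then be obtained by normalizing the per-class kappa values into the compactness score z_j and rescaling m^+ as in the equations above, while the negative margin stays fixed across classes.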
§ EXPERIMENTS We benchmark our methodology over a variety of large-scale image retrieval benchmarks including cars, birds, and nature species, using different base losses and DNN backbones. First, we give detailed ablation studies to justify our design choices. We then demonstrate the advantages of our CAM and AdaCAM regularizations in concurrently boosting calibration consistency and accuracy through large-scale image retrieval experiments. §.§ Dataset and Implementation Details Datasets. We use commonly-used image retrieval benchmarks including iNaturalist-2018 (iNat) <cit.>, CUB-200-2011 (CUB) <cit.> and Cars-196 (Cars) <cit.>. In particular, the iNaturalist dataset follows the open-set train-test-split where the training classes are disjoint to the test classes. The details of the datasets are listed in <ref>. For evaluation, we report recall@k for accuracy, and use OPIS and ϵ-OPIS defined in <ref> for calibration consistency. In line with <cit.>, we estimate calibration consistency using normalized features of image pairs in 1:1 comparisons. Due to the large number of classes in iNaturalist, instead of exhaustive sampling of all pairs, we only sample positive pairs exhaustively and sample negative pairs randomly with a fixed negative-to-positive ratio of 10-to-1 for each class. All pairs in CUB and Cars are exhaustively sampled. Implementation details. We consider both ResNet50<cit.> and the Vision Transformer <cit.> backbones. Following <cit.>, the ResNet50 is pretrained on ImageNet <cit.>. For the Vision Transformers (ViT), we follow <cit.> and use ImageNet-21k initialization from the timm <cit.> library. Since the original papers do not report the OPIS metric, we train both baseline models (without CAM) and CAM-regularized models using the same set-up. All of the hyper-parameters for each base loss are taken from the original papers. For CAM, we set λ^+ = λ^-= 1 for simplicity. The margin parameters (m^+ , m^-) are tuned using grid search on 10% of the training data for each benchmark. For AdaCAM , we let κ_min and κ_max be the 5^th and 95^th percentiles of vMF concentrations for all classes in every epoch to reduce the impact of outliers, respectively. The other parameters remain the same as the non-adaptive CAM. We also use the same optimization algorithms including the learning rate as each base loss. During training, mini-batches are generated following <cit.> by randomly sampling 4 images per class. The calibration range is based on the FAR range for the end-user application, e.g., a low FAR range is more relevant for safety critical ones. This is similar to the choice of k in recall@k where a smaller k entails a higher requirement in precision. For consistency, we use the same calibration range of 1e-2≤FAR≤1e-1 in all three benchmarks. §.§ Ablation and Complexity Analysis Pareto Frontier for Accuracy and Calibration Consistency. In <ref> we visualize different dynamics between calibration consistency and accuracy in different accuracy regimes for models trained on iNaturalist with various losses, backbones and batch sizes. In the low-accuracy regime (right along the x-axis), the accuracy tends to improve concurrently with calibration consistency. This is aligned with the conventional belief that stronger discriminability can improve calibration consistency by encouraging stronger affinity of samples towards the class centroids. 
However, with increasing accuracy, a Pareto frontier <cit.> starts to form between recognition accuracy and calibration consistency in the high-accuracy regime (recall@1 approaching 100%), where accuracy improvement leads to degradation in calibration consistency. The same trade-off is observed in other benchmarks including CUB and Cars. While it might be counter-intuitive, this finding is not surprising: as calibration consistency measures the statistical uniformity in inter-class and intra-class embedding structures, it is implicitly identifying sources of bias which often comes at the cost of accuracy. Effect of CAM Margin Hyper-parameter. We ablate over the margin hyper-parameters m^+ and m^- in the CAM regularization. As shown in <ref>, adding CAM effectively improves the calibration consistency compared to the baseline Smooth-AP (SAP) loss across all combinations of margin hyper-parameters. For accuracy, it is observed that the negative margin m^- contributes more to the performance than the positive margin m^+. When it is too stringent, e.g., m^-=0.25, the accuracy drops below the baseline. We conjecture that an overly-tight requirement on the negative margin may overshadow the baseline loss as well as the positive term in CAM, leading to degraded accuracy. Comparison with Other Regularizations. In <ref> we show that CAM outperforms the other regularizers including the CENM loss <cit.> which is designed for improving calibration consistency in DML. We ascribe this improvement to CAM's effectiveness in encouraging uniformity in inter- and intra-class distances, as mentioned in <ref>. The other losses, however, tend to interfere with the base loss (L_SAP), resulting in lower retrieval accuracy. Note that although adding contrastive loss as the regularizer leads to the best calibration consistency, it also causes degradation in accuracy. However, our CAM regularization improves both accuracy and calibration consistency at the same time. Effect of CAM over different base DML losses. We add CAM regularization to a variety of SoTA DML losses including Smooth-AP <cit.> and Recall@k Surrogate <cit.>. As is shown in <ref> , adding CAM regularization consistently improves accuracy and calibration consistency at the same time across top-performing base losses. Effect of Different Architectures on CAM. In <ref>, we show that the accuracy and calibration consistency improvement induced by adding the CAM regularization is universal across different backbone architectures. In general, we find that there is more improvement in accuracy for ResNets models than for ViTs after adding CAM. CAM Time Complexity. In <ref>, we compare CAM to Recall@k Surrogate, the SoTA loss for image retrieval, to show that the slightly increased time complexity of CAM and its adaptive variant, AdaCAM, leads to a negligible increase (<3.6%) in the overall training time per epoch. §.§ CAM Large-Scale Image Retrieval Experiment The results for models trained with and without the CAM regularizer over large-scale benchmarks are summarized in <ref>. For the Recall@k Surrogate loss <cit.>, we use their official codebase on top of our CAM implementation. It is clear that our CAM loss is effective in improving calibration consistency (measured by OPIS and ϵ-OPIS), by up to 77.3%, compared to the different baseline losses considered. Meanwhile, adding CAM regularization is shown to consistently improve accuracy across almost all benchmarks, base losses and backbone architectures. 
Specifically, on iNaturalist, the largest image retrieval benchmark, adding our CAM regularization is shown to outperform the SoTA DML method L_RS@k, reducing the OPIS calibration inconsistency score from 0.37e-3 to 0.17e-3 while improving the recall@1 accuracy from 84.0% to 84.8%. Adaptive CAM. <ref> gives the results for fine-tuning a CAM-regularized model (trained with L_base+L_CAM) with AdaCAM (L_base+L_AdaCAM). For the ViT-B/16 architecture, introducing class-adaptiveness in the positive margin during the fine-tuning stage increases the overall recall@1 accuracy by a large margin, from 84.8% to 85.1% for iNaturalist, 87.6% to 88.4% for CUB, and 87.7% to 89.7% for Cars. As fine-tuning with AdaCAM exploits the potential for accuracy improvement by relaxing the positive margin class-adaptively, it tends to cause a minor degradation in OPIS compared to the CAM-regularized baseline, as shown in the table, although it remains significantly better than training without the CAM regularization (trained with L_base only). § CONCLUSION This work has formalized the notion of calibration inconsistency in DML. We developed an original metric, named the Operating-Point-Inconsistency Score (OPIS), to quantify the calibration inconsistency across different test classes, which can be used orthogonally to existing accuracy metrics as an additional guard rail for model evaluation in DML. With OPIS, we found that the calibration inconsistency problem could not be fully resolved with higher model accuracy. To address this, we proposed a novel hinge-loss-based regularization, called the Calibration-Aware Margin (CAM) loss, which simultaneously enforces equality in intra-class compactness and inter-class separateness across different classes, together with a class-adaptive variant (AdaCAM) based on class-level representation compactness approximated by the vMF concentration to further boost accuracy. With CAM, we demonstrated SoTA performance in both accuracy and calibration consistency on a variety of large-scale image retrieval benchmarks. Limitations. As with other inductive learning methods, CAM is subject to failure with a large distribution shift between the training set and the test set. Additionally, CAM is pair-based, so applying it to million-scale class sizes such as face recognition remains an open question; one possibility is to modify CAM to constrain the L_2 distance between samples and nearby class prototypes instead.
http://arxiv.org/abs/2307.07191v1
20230714065002
Benchmarks and Custom Package for Electrical Load Forecasting
[ "Zhixian Wang", "Qingsong Wen", "Chaoli Zhang", "Liang Sun", "Leandro Von Krannichfeldt", "Yi Wang" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Load forecasting is of great significance in the power industry, as it provides a reference for subsequent tasks such as power grid dispatch and thus brings substantial economic benefits. However, there are many differences between load forecasting and traditional time series forecasting. On the one hand, load forecasting aims to minimize the cost of subsequent tasks such as power grid dispatch, rather than simply pursuing prediction accuracy. On the other hand, the load is largely influenced by many external factors, such as temperature or calendar variables. In addition, the scale of predictions (such as building-level loads and aggregated-level loads) can also significantly impact the predicted results. In this paper, we provide a comprehensive load forecasting archive, which includes load domain-specific feature engineering to help forecasting models better model load data. In addition, different from the traditional loss function which only aims for accuracy, we also provide a method to customize the loss function based on the forecasting error, integrating it into our forecasting framework. Based on this, we conducted extensive experiments on load data at different levels, providing a reference for researchers to compare different load forecasting models. § INTRODUCTION Time series data are becoming ubiquitous in numerous real-world applications <cit.>. Among them, electrical load forecasting is crucial for maintaining the supply and demand balance in the power system. Thanks to the development of machine learning in recent years, various methods have been developed for load forecasting <cit.>. To further promote the development of this field, power load forecasting competitions such as the Global Energy Forecasting (GEF) Competition have been held over many years <cit.>. In addition, many competitions target specific themes, such as building energy management based on electricity demand and solar PV generation <cit.> and the impact of Covid-19 on power systems <cit.>. Although many advanced time series forecasting methods have emerged in the past decades, the winners of load forecasting competitions often use relatively simple machine learning models. The secret to their victory lies in targeted feature engineering and adjustment of forecasting strategies. This is a major difference between load forecasting and general time series forecasting <cit.>. To address this situation and provide a reference for future researchers in related fields, we have developed a package that differs from existing time series packages <cit.>. Specifically, in our package, we split the entire power forecasting process into five modules: data preprocessing, feature engineering, forecasting methods, postprocessing, and evaluation metrics. Our package covers both probabilistic forecasting and point forecasting, providing feature engineering methods and predictors based on both traditional machine learning models and deep learning models.
Users can combine any of these components and obtain their customized models. Furthermore, our package adds specific functionalities to address the characteristics of load forecasting and its differences from traditional time series forecasting, greatly enhancing the user's freedom to construct load forecasting models. Below, we introduce the characteristics of our forecasting package. Compared with other time series, electrical load data are strongly affected by external factors such as temperature and calendar variables, making it challenging to model the load dynamics accurately. Therefore, exploring the impact of external factors on load forecasting has always been an important research direction in this field <cit.>. Temperature, in particular, is considered to have a significant impact on the power load, and many researchers have focused on how to use temperature variables to assist in constructing load forecasting models <cit.>. At present, the utilization of temperature variables can be roughly divided into two strategies. One is to apply targeted transformations to the temperature variables, which are often based on relatively simple statistical learning methods <cit.>. The other is to extract features with neural networks. Such models usually achieve better accuracy <cit.>, but their interpretability decreases due to the black-box characteristic of neural networks. Nevertheless, related feature engineering can also guide neural network-based forecasting models, although no large-scale experimental results have been provided to demonstrate this so far. Therefore, we provide various related feature engineering methods in our package and discuss the impact of temperature-based feature engineering on load forecasting models. Apart from feature engineering, another difference is that the most important concern of power load forecasting is achieving the lowest cost rather than the best prediction accuracy. Due to the diversity of time series, general time series forecasting results are rarely optimized for a specific downstream task. However, load forecasting results are mainly used for subsequent power grid dispatch, which motivates us to pay attention to the relationship between the prediction and the subsequent decision-making cost. <cit.> discovered an asymmetry between cost and forecasting error: the economic losses caused by predicting above the true value and by predicting below it are different. Therefore, bias will be introduced if we simply use traditional gradient-based loss functions such as MSE and MAE to train the model. Then, <cit.> proposed to use piecewise linearization and the Huber function to model the relationship between forecasting error and real cost. Inspired by this work, our package provides methods for modeling the relationship between the forecasting error and other variables and then constructing the corresponding loss function. At the same time, we also design an asymmetric loss function and obtain better results than the traditional MSE loss function on multiple datasets. Lastly, we conduct extensive experiments to evaluate the point forecasting and probabilistic forecasting performance of different models on multiple load series at different levels. Furthermore, we demonstrate how the feature engineering and the loss function we provide can help models achieve better forecasting results. We summarize our primary contributions as follows: * The first large benchmark for electrical load forecasting.
This benchmark brings important reference significance for the day-ahead dispatching of the power grid. Based on the demand for grid day-ahead dispatching, we have adopted a slightly different forecasting setting from most existing time series settings. Instead of requiring continuous historical data inputs, the corresponding hours of multiple days are considered as time series for forecasting. Sufficient time shall be reserved for the day-ahead dispatching of the power grid by this setting. * Domain-specific feature engineering and self-defined loss function. Based on the characteristics of load, temperature, and calendar variables, we integrate the feature engineering that reflects the ternary relationship into our package for users to use in any forecasting model. At the same time, we also provide users with a method to customize the loss function. Users can define the relationship between the forecasting error and any variable (such as the dispatching cost of the power grid) and integrate it into our forecasting framework as a loss function. * Fully open-source platform with accessibility and extensibility. We release the relevant code on GitHub[<https://github.com/Leo-VK/ProEnFo>]. Users can design their load forecasting framework by freely combining the components we provide to cope with different power load forecasting scenarios. At the same time, we also provide a variety of evaluation and visualization methods to facilitate users to evaluate the predictive performance of different models from multiple perspectives. § DATA DESCRIPTION In this section, we will introduce how our dataset is collected and the characteristics of the dataset. We have collected a total of 11 datasets as our data collection, and a detailed description of each dataset is provided in the Appendix(Supplementary materials). In summary, the data we collect mainly comes from UCI machine learning databases <cit.>, Kaggle data competition platforms <cit.>, and the famous global energy forecasting competitions <cit.>. In addition, we also put a dataset reflecting the impact of the COVID-19 epidemic on the power system into our archives. Under the influence of COVID-19, which is an influential external factor, the power load has changed significantly, posing a challenge to the robustness of the forecasting model <cit.>. From the perspective of load hierarchy, 7 of the data we collect are aggregated-level datasets, and the remaining 4 are building-level datasets. Aggregated-level load refers to the total load that aggregates multiple independent loads (such as the power demand of various electrical appliances, equipment, or buildings in the power system) together. Because the aggregated-level load results from multiple load aggregations, it typically exhibits more pronounced periodicity and seasonality. For this reason, calendar variables significantly impact load forecasting at this level. The opposite is the load of the building level, which can also be seen as a part of the aggregated load. Building-level loads change very dramatically, resulting in significant uncertainty. Therefore, many works related to building-level load forecasting often focus on probabilistic forecasting <cit.>. To provide a reference for researchers in related fields, we also collect building-level datasets from the Building Data Genome 2 (BDG2) Data-Set <cit.>. 
In addition to different levels, the data we collect also has a characteristic that almost all cover meteorological data such as temperature, which may be greatly beneficial to forecasting because of the great impact of external variables (especially temperature) on load. The number of time series contained in each dataset and their corresponding features are listed in Table <ref>. And all the data will be released under appropriate licenses. § PACKAGE FUNCTIONS §.§ Overview of the package Fig <ref> shows the overview of our packages. As stated before, we divide the overall forecasting process into several parts to address potential issues in load forecasting the power. First of all, load data is obtained by physical devices such as electricity meters. During this process, it is inevitable to encounter missing values, omissions, and other situations. Such a situation is more common in the load data of building-level <cit.>. In this regard, our package provides various methods such as ARIMA based on Kalman filtering <cit.>, K-nearest neighbor algorithm <cit.> to fill in missing values, ensuring minimum data information distortion. Secondly, our model provides a variety of feature selection strategies to meet the needs of different scenarios. For example, users can choose the corresponding data from the previous seven days for day-ahead forecasting or use Autocorrelation Function(ACF) and Partial Autocorrelation Function(PACF) metrics to help select the lagged values. In addition, our framework allows users to add external variables such as temperature and calendar variables that may impact the forecasting model. As for the forecasting methods, we provide both probabilistic forecasting and point forecasting methods. Among them, probabilistic forecasting will be based on quantile forecasting. However, quantile regression may lead to confusion about quantile, that is, the forecasting result of a larger quantile is smaller than that of a smaller quantile. For this situation, we have provided corresponding post-processing for reordering. After we get forecasting results, we need reasonable metrics to evaluate them. Existing forecasting packages generally provide a variety of metrics, such as Pinball Loss and CRPS. Although they can evaluate the quality of forecasting results, they reduce the discrimination of forecasting models. For example, a model may perform poorly in a certain quantile while performing well in other quantiles. To more intuitively compare the performance of models in different quantiles, our package provides the matrix visualization function of multiple metrics in different quantiles. And the evaluation metrics we have implemented include CalibrationError <cit.>, WinklerScore <cit.>, CoverageError, and so on. §.§ Feature engineering strategy The impact of temperature on load is greatly influenced by calendar variables. Inspired by the HongTao linear regression model <cit.>, we apply one-hot encoding to calendar variables and then model this coupling relationship by taking their products with temperature to the first, second, and third powers as features. Because of the one-hot coding, one categorical variable is changed into multiple binary categorical variables. When the corresponding variable is 0, the parameters of the linear model will not have any effect on it. Therefore, the result of doing so is constructing multiple personalized models based on calendar variables. 
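As an illustration of this coupling strategy, the sketch below builds the calendar-temperature design matrix from an hourly DataFrame: calendar variables are one-hot encoded and multiplied with the first, second, and third powers of temperature. It is a simplified example written for this description rather than the package's actual API; the function name, the choice of hour, weekday, and month as calendar variables, and the column naming are assumptions.

import pandas as pd

def calendar_temperature_features(df):
    # df: DataFrame with a DatetimeIndex and a 'temperature' column.
    cal = pd.DataFrame({
        "hour": df.index.hour,
        "weekday": df.index.dayofweek,
        "month": df.index.month,
    }, index=df.index)
    dummies = pd.get_dummies(cal.astype("category")).astype(float)  # one-hot calendar variables
    blocks = [dummies]
    for p in (1, 2, 3):                                             # couple with T, T^2, T^3
        cross = dummies.mul(df["temperature"] ** p, axis=0)
        cross.columns = [f"{c}_T{p}" for c in dummies.columns]
        blocks.append(cross)
    return pd.concat(blocks, axis=1)

Because each one-hot column switches its temperature terms on only for the corresponding hour, weekday, or month, a linear model on these features behaves like the set of personalized temperature-load models described above; for neural networks, the same matrix can simply be concatenated with the sequence features.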
Such a feature engineering strategy can help the forecasting model cope with situations where the temperature and load relationship shifts under different calendar variables. To preserve such characteristics and integrate existing sequence modeling methods (such as LSTM, and N-BEATS), we treat the information extracted by sequence modeling methods as trend variables and concatenate them with the previously obtained calendar temperature coupling variables. Finally, a fully connected layer is used to map the final output result. In section 5, we will compare the impact of this feature engineering on forecasting results across multiple datasets. §.§ Custom loss function Based on <cit.>, our package provides corresponding piecewise linearization functions to help users model the relationship between forecasting errors and real requirements (such as scheduling costs) and integrate it into the gradient descent training. Specifically, we need data pairs (ϵ_i,C_i)_i=1,…,N, where ϵ_i is the forecasting error and C_i is the real requirement. Here, we consider using Forecasting Error Percentage(FEP) ϵ_i=f(x_i)-y_i/y_i as our error metric. At the same time, we normalize {C}_i=1,…,N, making its value fall between 0 and 1. Now our goal has become how to construct L(ϵ) to estimate C. To achieve high fitting accuracy, we can use a spline cubic function, denoted as s, to fit it. However, the disadvantage of doing so is that there will be many discontinuities, which is not conducive to integrating them into our forecasting framework as a loss function. To ensure the fitting ability of the function while making it as simple as possible, a piecewise linearization strategy is adopted here. Among them, the number of segments K can be determined by setting the upper bound of the fitting error <cit.>, s-L(ϵ)_2 ≤(∫_ϵ_min^ϵ_max s^''(ϵ)^2/5 d ϵ)^5/2/√(120) K^2. As for the position of the corresponding interval points, we strive to evenly distribute the data points within the interval formed by each of two endpoints <cit.>. So far, we have obtained a piecewise linearization function. To take it as a loss function, we need to ensure its differentiability. Specifically, we use a quadratic function in a cell around each breakpoint to smooth it. Note that the quadratic function does not need to fit the data, but only needs to ensure its left and right continuity and the continuity of the corresponding first derivative to obtain the parameters of itself. § BENCHMARKING PROCESS Fig <ref> shows the benchmarking pipeline for day-ahead power load forecasting. In recent years, probabilistic forecasting has been considered a more reliable forecasting method because it can output not only predicted values but also corresponding prediction intervals, providing more information for decision-makers in subsequent tasks for reference. Therefore, we will mainly discuss the results of probabilistic forecasting. At the same time, to explain our proposed custom loss function, we will also compare the point forecasting performance of the forecasting model trained on gradient descent. Data Preprocessing & Day-ahead forecasting. We first use the functions provided by our package to fill in missing values and solve the problem of zero padding. For forecasting scenarios, we chose the most common day-ahead load forecasting, which is to forecast 24 hours in advance, as our main task for evaluation (our package also supports the construction of other forecasting scenarios). 
To meet the needs of subsequent power grid scheduling, load forecasting needs to reserve sufficient time for subsequent tasks, which means that there is a certain gap between the available historical sequences and the forecasting range. Therefore, we adopt the widely used forecasting setting in the power industry, which uses the historical values of the previous 7 days at the same time to forecast the corresponding power load on the 8th day. To our knowledge, we are the first to construct forecasting benchmarks on a large scale under this forecasting setting. Feature Engineering. As mentioned in Section 3.2, we apply the transformation of feature engineering based on temperature and calendar variables to our forecasting models. For sequence models like the LSTM, we concatenate the features with the output of the models and input them into a single-layer ANN. As for the non-sequence models, we just concatenate all the features and input lagged values. As a comparison, we also conduct experiments on non-transformed features simultaneously, that is, directly inputting calendar variables and temperature as features. Forecasting Models & Loss functions. For comparison, we introduce 16 probabilistic forecasting methods covering multiple types. It includes 2 simple moving quantile methods (based on global historical data and fixed length window) and 2 models according to forecasting error (based on the Persistence and linear regression methods). In addition, there are 5 non-deep learning methods, and they are quantile regression methods based on the K-nearest neighbor algorithm <cit.>, quantile regression methods based on random forest and sample random forest <cit.>, and quantile regression methods based on extreme random tree and sample extreme random tree <cit.>. Finally, we introduce 7 deep learning methods. Firstly, there are simple forward propagation networks <cit.>, LSTM networks <cit.> for sequence modeling, convolutional neural networks <cit.> (where we use one-dimensional convolutional kernels), and Transformer <cit.> networks applying attention mechanisms. Secondly, we have methods that modify the above neural network structures to make them more suitable for time series forecasting, such as LSTNet <cit.>, which is designed to simultaneously capture both long-term and short-term patterns of time series, Wavenet based on causal convolution <cit.>, and N-BEATS stacked into blocks using multiple linear layers <cit.>. Among them, the neural network is trained based on gradient descent. For probabilistic forecasting, we take the sum of ninety-nine quantile losses from 0.01 to 0.99 as the loss function. For point forecasting, we give an asymmetric differentiable loss function through data fitting and integrate it into our forecasting framework as a loss function. At the same time, we also construct neural networks based on the traditional MSE Loss function for comparison. Evaluation. To evaluate the models, We select and implement several metrics. Fig <ref> provides a visualization example based on the GEF17 CT dataset. The abscissa represents a different quantile while the ordinate represents the corresponding evaluation metric. These metrics evaluate different forecasting models by considering the accuracy of the prediction intervals, whether the forecasting model can accurately capture the climb and descent processes of the sequence, and so on. A detailed analysis will be presented in our Appendix. 
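The two training objectives described in this section can be made concrete with the following sketches. The first is the standard pinball (quantile) objective summed over the 99 levels 0.01, ..., 0.99 used for probabilistic forecasting; the second fits a continuous piecewise-linear approximation of the cost-versus-error relationship that underlies the asymmetric point-forecasting loss. Both are simplified illustrations with our own function names rather than the package's API: the number of segments is fixed instead of being derived from the fitting-error bound, and the quadratic smoothing of breakpoints needed for a differentiable loss is omitted.

import torch
import numpy as np

def pinball_loss(y_true, y_pred_quantiles, quantiles):
    # y_true: [n]; y_pred_quantiles: [n, Q]; quantiles: Q levels, e.g. 0.01 ... 0.99.
    q = torch.as_tensor(quantiles, dtype=y_pred_quantiles.dtype, device=y_pred_quantiles.device)
    diff = y_true.unsqueeze(1) - y_pred_quantiles
    loss = torch.maximum(q * diff, (q - 1.0) * diff)        # pinball loss per quantile
    return loss.mean(dim=0).sum()                            # sum over the quantile levels

def fit_piecewise_linear_cost(eps, cost, n_segments=6):
    # eps: forecasting errors; cost: normalized downstream cost in [0, 1].
    knots = np.unique(np.quantile(eps, np.linspace(0, 1, n_segments + 1)))
    # linear-spline (tent) basis keeps the fitted curve continuous and piecewise linear
    basis = np.stack([np.interp(eps, knots, np.eye(len(knots))[j])
                      for j in range(len(knots))], axis=1)
    values, *_ = np.linalg.lstsq(basis, cost, rcond=None)
    return knots, values              # evaluate the curve with np.interp(error, knots, values)

To use the fitted curve for training, the same knots and values would be re-implemented inside the automatic-differentiation framework (with the breakpoints smoothed as described above) so that gradients can flow through the piecewise segments.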
§ BENCHMARK EVALUATION §.§ Quantile-based probabilistic forecasting We conduct extensive experiments on the collected load dataset based on the 16 probabilistic forecasting methods mentioned above. In datasets with temperature information, we randomly select a representative from each building characteristic in the Hog and Bull datasets and select the top five load series in the GEF14 dataset. The forecasting results of these load series, along with forecasts from all other load series in datasets with temperature information, will be used for display. And other datasets together with the forecasting results of datasets UCI and ELF will be shown in the Appendix. Table <ref> reports parts of the evaluation results of the experiment by PinballLoss, where _T means that we transform the calendar variables and temperature features while the original model means that we did not perform one-hot coding on calendar variables. Fig <ref> reports the proportion of forecasting methods improved by the temperature transformation. Here the NP represents the proportion with improvement in PinballLoss for non-deep learning methods while NP represents the proportion without improvement. DP and DNP are similar but on deep learning models. From the perspective of forecasting models, non-deep learning methods perform better than deep learning methods without the temperature transformation strategy. In deep learning methods, simple ANN, LSTM, and CNN methods usually perform better than the rather complicated ones. Moreover, these complex deep learning models like the Wavenet and N-BEATS may even encounter underfitting situations. With the temperature transformation strategy, non-deep learning methods are not improved most of the time(as shown in Table <ref> and Fig<ref>). However, deep learning methods have great improvements with temperature transformation. Among them, for the Covid19 dataset, adding this feature engineering significantly reduced the forecasting results. The characteristic of this data is that after the impact of COVID-19, the load of the power system has changed significantly, and there is a large deviation between the training set and the test set. Therefore, the decrease in forecasting performance indicates that after this feature engineering, the model tends to learn more about the relationship between temperature and load, while ignoring the influence of historical load to a certain extent. This conclusion can also be seen in some building-level datasets. When there is a significant offset in the building level dataset, which is not caused by temperature, such feature engineering may also lead to a decrease in the forecasting performance of the model. On the contrary, in datasets such as GEF12, 14, and 17, it can be seen that for relatively stable aggregated level loads, such feature engineering can significantly improve the performance of the forecasting model. §.§ Comparison of asymmetric fitting loss function We use the polynomial function to simulate relevant data points (see Appendix for details) and use the function given by our package to fit an asymmetric loss function. Among them, because the results in the previous section show that temperature-based feature engineering has a significant improvement effect on the deep learning network, we use this feature engineering on all of the methods. Fig <ref> shows the proportion of forecasting methods improved by the asymmetric loss function. 
P represents the proportion with improvement in MAPE while NP represents the proportion without improvement. From the perspective of datasets, the asymmetric loss function has an obvious improvement effect on most datasets. It can not help more than 50% of forecasting models improve forecasting accuracy solely on the GEF12 dataset. This result indicates that the MSE function is not suitable for describing the error between the predicted values and the actual load in the vast majority of cases. Furthermore, the MSE loss function comes from the assumption of Gaussian distribution, which assumes that the forecasting error follows a Gaussian distribution with a certain variance. Our experiments show that such assumptions are unreasonable in many cases, at least the true error distribution may not be symmetrical. Although it is difficult to provide a true error distribution, we can construct various loss functions to approximate the true error distribution through data fitting. Through our package, users can easily achieve this. § CONCLUSIONS In this paper, we introduce our package for constructing an electrical forecasting framework. We split the entire power forecasting process into several modules for users to freely combine and construct their own forecasting frameworks. In addition, our package also provides the engineering implementation of features based on temperature and the construction method of custom loss function by data fitting. The experimental results indicate that these are all helpful for load forecasting. What's more, the construction method of the loss function can also be used to model the relationship between requirements in real tasks and forecasting errors. Based on the package, we build the first large-scale benchmark by conducting extensive experiments on multiple levels of power load datasets, comparing various models, which provides a reference for researchers in this field. ieeetr 10 wen2022robust Q. Wen, L. Yang, T. Zhou, and L. Sun, “Robust time series analysis and applications: An industrial perspective,” in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'22), pp. 4836–4837, 2022. lai2021revisiting K.-H. Lai, D. Zha, J. Xu, Y. Zhao, G. Wang, and X. Hu, “Revisiting time series outlier detection: Definitions and benchmarks,” in Thirty-fifth conference on neural information processing systems (NeurIPS) datasets and benchmarks track (round 1), 2021. zhou2022film T. Zhou, Z. Ma, Q. Wen, L. Sun, T. Yao, W. Yin, and R. Jin, “FiLM: Frequency improved legendre memory model for long-term time series forecasting,” Advances in Neural Information Processing Systems (NeurIPS), vol. 35, pp. 12677–12690. wang2018review Y. Wang, Q. Chen, T. Hong, and C. Kang, “Review of smart meter data analytics: Applications, methodologies, and challenges,” IEEE Transactions on Smart Grid, vol. 10, no. 3, pp. 3125–3148, 2018. yildiz2017review B. Yildiz, J. I. Bilbao, and A. B. Sproul, “A review and analysis of regression and machine learning models on commercial building electricity load forecasting,” Renewable and Sustainable Energy Reviews, vol. 73, pp. 1104–1122, 2017. zhang2021review L. Zhang, J. Wen, Y. Li, J. Chen, Y. Ye, Y. Fu, and W. Livingood, “A review of machine learning in building load prediction,” Applied Energy, vol. 285, p. 116452, 2021. hong2014global T. Hong, P. Pinson, and S. Fan, “Global energy forecasting competition 2012,” 2014. hong2016probabilistic T. Hong, P. Pinson, S. Fan, H. Zareipour, A. Troccoli, and R. J. 
Hyndman, “Probabilistic energy forecasting: Global energy forecasting competition 2014 and beyond,” 2016. hong2019global T. Hong, J. Xie, and J. Black, “Global energy forecasting competition 2017: Hierarchical probabilistic load forecasting,” International Journal of Forecasting, vol. 35, no. 4, pp. 1389–1399, 2019. CityLearn “NeurIPS 2022 The CityLearn Challenge: Using AI for building's energy management.” <https://www.aicrowd.com/challenges/neurips-2022-citylearn-challenge>, 2022. farrokhabadi2022day M. Farrokhabadi, J. Browell, Y. Wang, S. Makonin, W. Su, and H. Zareipour, “Day-ahead electricity demand forecasting competition: Post-covid paradigm,” IEEE Open Access Journal of Power and Energy, vol. 9, pp. 185–191, 2022. sobhani2020temperature M. Sobhani, T. Hong, and C. Martin, “Temperature anomaly detection for electric load forecasting,” International Journal of Forecasting, vol. 36, no. 2, pp. 324–333, 2020. alexandrov2020gluonts A. Alexandrov, K. Benidis, M. Bohlke-Schneider, V. Flunkert, J. Gasthaus, T. Januschowski, D. C. Maddix, S. Rangapuram, D. Salinas, J. Schulz, et al., “Gluonts: Probabilistic and neural time series modeling in python,” The Journal of Machine Learning Research, vol. 21, no. 1, pp. 4629–4634, 2020. godahewa2021monash R. Godahewa, C. Bergmeir, G. I. Webb, R. J. Hyndman, and P. Montero-Manso, “Monash time series forecasting archive,” in Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track (Round 2), 2021. aprillia2020statistical H. Aprillia, H.-T. Yang, and C.-M. Huang, “Statistical load forecasting using optimal quantile regression random forest and risk assessment index,” IEEE Transactions on Smart Grid, vol. 12, no. 2, pp. 1467–1480, 2020. haben2019short S. Haben, G. Giasemidis, F. Ziel, and S. Arora, “Short term load forecasting and the effect of temperature at the low voltage level,” International Journal of Forecasting, vol. 35, no. 4, pp. 1469–1484, 2019. liu2023sadi H. Liu, Z. Ma, L. Yang, T. Zhou, R. Xia, Y. Wang, Q. Wen, and L. Sun, “Sadi: A self-adaptive decomposed interpretable framework for electric load forecasting under extreme events,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5, 2023. guan2021feature Y. Guan, D. Li, S. Xue, and Y. Xi, “Feature-fusion-kernel-based gaussian process model for probabilistic long-term load forecasting,” Neurocomputing, vol. 426, pp. 174–184, 2021. farfar2019two K. E. Farfar and M. T. Khadir, “A two-stage short-term load forecasting approach using temperature daily profiles estimation,” Neural Computing and Applications, vol. 31, pp. 3909–3919, 2019. imani2021electrical M. Imani, “Electrical load-temperature cnn for residential load forecasting,” Energy, vol. 227, p. 120480, 2021. hafeez2020electric G. Hafeez, K. S. Alimgeer, and I. Khan, “Electric load forecasting based on deep learning and optimized by heuristic algorithm in smart grid,” Applied Energy, vol. 269, p. 114915, 2020. wang2017improving Y. Wang and L. Wu, “Improving economic values of day-ahead load forecasts to real-time power system operations,” IET Generation, Transmission & Distribution, vol. 11, no. 17, pp. 4238–4247, 2017. zhang2022cost J. Zhang, Y. Wang, and G. Hug, “Cost-oriented load forecasting,” Electric Power Systems Research, vol. 205, p. 107723, 2022. dua2017 D. Dua and C. Graff, “Uci machine learning repository.” <http://archive.ics.uci.edu/ml>, 2017. Jhana2019 J. 
Nicholas, “Hourly energy demand generation and weather.”<https://www.kaggle.com/datasets/nicholasjhana/energy-consumption-generation-prices-and-weather>, 2019. Kaggle. Yeafi2021 A. Yeafi, “Pdb electric power load history.” <https://www.kaggle.com/datasets/ashfakyeafi/pbd-load-history>, 2021. Kaggle. Shahanei2021 S. Shahane, “Electricity load forecasting.” <https://www.kaggle.com/datasets/saurabhshahane/electricity-load-forecasting>, 2021. Kaggle. xu2019probabilistic L. Xu, S. Wang, and R. Tang, “Probabilistic load forecasting for buildings considering weather forecasting uncertainty and uncertain peak load,” Applied energy, vol. 237, pp. 180–195, 2019. jeong2021short D. Jeong, C. Park, and Y. M. Ko, “Short-term electric load forecasting for buildings using logistic mixture vector autoregressive model with curve registration,” Applied Energy, vol. 282, p. 116249, 2021. Miller2020-yc C. Miller, A. Kathirgamanathan, B. Picchetti, P. Arjunan, J. Y. Park, Z. Nagy, P. Raftery, B. W. Hobson, Z. Shi, and F. Meggers, “The building data genome project 2, energy meter data from the ASHRAE great energy predictor III competition,” Scientific Data, vol. 7, p. 368, Oct. 2020. jeong2021missing D. Jeong, C. Park, and Y. M. Ko, “Missing data imputation using mixture factor analysis for building electric load data,” Applied Energy, vol. 304, p. 117655, 2021. harvey1984estimating A. C. Harvey and R. G. Pierse, “Estimating missing observations in economic time series,” Journal of the American statistical Association, vol. 79, no. 385, pp. 125–131, 1984. garcia2010pattern P. J. García-Laencina, J.-L. Sancho-Gómez, and A. R. Figueiras-Vidal, “Pattern classification with missing data: a review,” Neural Computing and Applications, vol. 19, pp. 263–282, 2010. chung2021beyond Y. Chung, W. Neiswanger, I. Char, and J. Schneider, “Beyond pinball loss: Quantile methods for calibrated uncertainty quantification,” Advances in Neural Information Processing Systems, vol. 34, pp. 10971–10984, 2021. barnett1973introduction V. Barnett, “An introduction to bayesian inference and decision,” 1973. berjon2015optimal D. Berjón, G. Gallego, C. Cuevas, F. Morán, and N. García, “Optimal piecewise linear function approximation for gpu-based applications,” IEEE transactions on cybernetics, vol. 46, no. 11, pp. 2584–2595, 2015. de1978practical C. De Boor and C. De Boor, A practical guide to splines, vol. 27. springer-verlag New York, 1978. hastie2009elements T. Hastie, R. Tibshirani, J. H. Friedman, and J. H. Friedman, The elements of statistical learning: data mining, inference, and prediction, vol. 2. Springer, 2009. meinshausen2006quantile N. Meinshausen and G. Ridgeway, “Quantile regression forests.,” Journal of machine learning research, vol. 7, no. 6, 2006. geurts2006extremely P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees,” Machine learning, vol. 63, pp. 3–42, 2006. jain1996artificial A. K. Jain, J. Mao, and K. M. Mohiuddin, “Artificial neural networks: A tutorial,” Computer, vol. 29, no. 3, pp. 31–44, 1996. hochreiter1997long S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997. li2021survey Z. Li, F. Liu, W. Yang, S. Peng, and J. Zhou, “A survey of convolutional neural networks: analysis, applications, and prospects,” IEEE transactions on neural networks and learning systems, 2021. vaswani2017attention A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. 
Polosukhin, “Attention is all you need,” Advances in neural information processing systems, vol. 30, 2017. lai2018modeling G. Lai, W.-C. Chang, Y. Yang, and H. Liu, “Modeling long-and short-term temporal patterns with deep neural networks,” in The 41st international ACM SIGIR conference on research & development in information retrieval, pp. 95–104, 2018. oord2016wavenet A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, “Wavenet: A generative model for raw audio,” arXiv preprint arXiv:1609.03499, 2016. oreshkin2019n B. N. Oreshkin, D. Carpov, N. Chapados, and Y. Bengio, “N-beats: Neural basis expansion analysis for interpretable time series forecasting,” in International Conference on Learning Representations (ICLR), 2020.
http://arxiv.org/abs/2307.04866v1
20230710192545
Automated Detection of Gait Events and Travel Distance Using Waist-worn Accelerometers Across a Typical Range of Walking and Running Speeds
[ "Albara Ah Ramli", "Xin Liu", "Kelly Berndt", "Chen-Nee Chuah", "Erica Goude", "Lynea B. Kaethler", "Amanda Lopez", "Alina Nicorici", "Corey Owens", "David Rodriguez", "Jane Wang", "Daniel Aranki", "Craig M. McDonald", "Erik K. Henricson" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.LG" ]
1 .001 Automated Detection of Gait Events and Travel Distance Albara Ah Ramli et al. mode = title]Automated Detection of Gait Events and Travel Distance Using Waist-worn Accelerometers Across a Typical Range of Walking and Running Speeds 1]Albara Ah Ramli Conceptualization, Methodology, Software, Formal analysis, Writing, Supervision, Validation, Visualization, Investigation, Data Curation 1]Xin Liu Writing - Review and Editing, Methodology, Supervision 2]Kelly Berndt Investigation, Data Curation, Writing - Review and Editing 3]Chen-Nee Chuah Writing - Review and Editing, Methodology, Supervision 2]Erica Goude Investigation, Supervision, Writing - Review and Editing 2]Lynea B. Kaethler Investigation, Data Curation, Writing - Review and Editing 2]Amanda Lopez Investigation, Data Curation, Writing - Review and Editing 2]Alina Nicorici Investigation, Data Curation, Methodology 4]Corey Owens Investigation, Data Curation 2]David Rodriguez Investigation, Data Curation, Writing - Review and Editing 2]Jane Wang Investigation, Data Curation, Writing - Review and Editing 5]Daniel Aranki Conceptualization, Methodology, Software, Analysis 2]Craig M. McDonald Conceptualization, Resources, Funding acquisition 2]Erik K. Henricson[type=editor,auid=000,bioid=1,prefix=,role=,orcid=0000-0002-4617-225X] Conceptualization, Methodology, Software, Formal analysis, Writing, Supervision, Funding acquisition, Investigation [1] [email protected] [1]organization=Department of Computer Science, School of Engineering; University of California,vaddressline=1 Shields Ave, city=Davis,postcode=95616, state=CA,country=USA [2]organization=Department of Physical Medicine and Rehabilitation, School of Medicine; University of California,addressline=1 Shields Ave, city=Davis, postcode=CA 95616, state=CA,country=USA [3]organization=Department of Electrical and Computer Engineering, School of Engineering; University of California,vaddressline=1 Shields Ave, city=Davis,postcode=95616, state=CA,country=USA [4]organization=UC Davis Center for Health and Technology, School of Medicine; University of California Davis,addressline=1 Shields Ave, city=Davis, postcode=CA 95616, state=CA,country=USA [5]organization=Berkeley School of Information; University of California Berkeley,addressline=1 Shields Ave, city=Berkeley, postcode=CA 94720, state=CA,country=USA [cor1]Corresponding author Background: Estimation of temporospatial clinical features of gait (CFs), such as step count and length, step duration, step frequency, gait speed and distance traveled is an important component of community-based mobility evaluation using wearable accelerometers. However, challenges arising from device complexity and availability, cost and analytical methodology have limited widespread application of such tools. Research Question: Can accelerometer data from commercially-available smartphones be used to extract gait CFs across a broad range of attainable gait velocities in children with Duchenne muscular dystrophy (DMD) and typically developing controls (TDs) using machine learning (ML)-based methods Methods: Fifteen children with DMD and 15 TDs underwent supervised clinical testing across a range of gait speeds using 10 or 25m run/walk (10MRW, 25MRW), 100m run/walk (100MRW), 6-minute walk (6MWT) and free-walk (FW) evaluations while wearing a mobile phone-based accelerometer at the waist near the body’s center of mass. 
Gait CFs were extracted from the accelerometer data using a multi-step machine learning-based process and results were compared to ground-truth observation data. Results: Model predictions vs. observed values for step counts, distance traveled, and step length showed a strong correlation (Pearson’s r = -0.9929 to 0.9986, p<0.0001). The estimates demonstrated a mean (SD) percentage error of 1.49% (7.04%) for step counts, 1.18% (9.91%) for distance traveled, and 0.37% (7.52%) for step length compared to ground truth observations for the combined 6MWT, 100MRW, and FW tasks. Significance: The study findings indicate that a single accelerometer placed near the body’s center of mass can accurately measure CFs across different gait speeds in both TD and DMD peers, suggesting that there is potential for accurately measuring CFs in the community with consumer-level smartphones. * Extracting CFs using a single accelerometer at varying speeds in DMD and TD peers. * ML-based method to estimate CFs such as steps, distance, duration, length, and speed. * Compare the estimated CFs with the ground truth observations and pedometer. * Suggests that CFs can be measured in the community without using GRF. Temporospatial gait clinical features, Duchenne muscular dystrophy, Typically-developing, Accelerometer, Machine learning, Gait cycle § INTRODUCTION Accelerometers can be more accurate than pedometers at slower walking speeds and in populations with atypical gait patterns, making pedometers less suitable for evaluating physical activity in such populations <cit.>. Estimating temporospatial clinical features (CFs) of gait (step length, step duration, step frequency, and gait speed) is a fundamental step in gait analysis, and detecting the initial contact (IC) of the heel is crucial for identifying gait events and the beginning of the step cycle. In a laboratory environment, detecting events and estimating CFs is typically done by measuring ground reaction forces (GRF) and verifying with visual observation. However, using these methods to measure gait events in the community is often impractical. Studies have described the potential of using acceleration signals to estimate CFs. Several studies have demonstrated that step length, gait speed, initial contact (IC), and incline can be determined from acceleration signals of the lower trunk <cit.>. Aminian and colleagues explored the feasibility of using a fully connected artificial neural network (ANN) with accelerometers on the trunk and heel to predict incline and speed based on ten statistical parameters extracted from the raw signal <cit.>. Results revealed that a negative peak in the heel accelerometer signal indicates IC events in each gait cycle (two steps). Studies comparing accelerometer signals from different body positions at various walking speeds demonstrate that positions near the body’s center of mass (trunk, waist, pelvis, and sacrum) are suitable for capturing gait events <cit.>. In a study by Zijlstra et al., participants walked on a force-transducing treadmill and overground while trunk acceleration data was recorded to estimate step lengths and walking speed. Initial contact (IC) events were matched with vertical ground reaction force (GRF) normalized by body weight to anteroposterior acceleration. The start and end of gait cycles from the GRF corresponded with the time of the peak amplitude value in the anteroposterior acceleration signal <cit.>. Further research by Lee et al. and Mo et al. 
demonstrated that IC events can be determined from anteroposterior acceleration measured at the pelvis and sacrum <cit.>. They collected accelerometer signals from the pelvis/sacrum and GRF data, and matched IC events on anteroposterior acceleration with vertical GRF. Initial contact events on the force plate corresponded with the instant of the positive peak pelvis/sacrum anteroposterior acceleration<cit.>. We present a machine learning (ML)-based method that automates detection of initial contact (IC) events and clinical features of gait (CFs) using raw accelerometer signals obtained from consumer mobile devices <cit.>. We demonstrate that using a single accelerometer worn close to the body's center of mass is an accurate and reliable approach to estimate CFs and IC events across a typical range of walking speeds. This method can be applied to healthy individuals and those with gait disturbances without the need for ground reaction force (GRF) measurements. § MATERIALS AND METHODS Estimating distance using accelerometer signals is challenging due to inherent quadratic error of accelerometers, which can result in deteriorating estimates even with short integration times and distances. Many methods attempt to estimate distance from accelerometers by integrating acceleration twice with respect to time, despite incorporating error-limiting mechanisms and setting restrictions, which can result in errors due to noise, drift, and bias <cit.>. We propose an ML-based signal processing method that accurately estimates an individual's distance traveled, step length, and number of steps across varying walking/running speeds, outperforming the built-in pedometer function on iPhones, which show the highest error percentage in slow walking speeds <cit.>. Because different individuals have different walking/running behaviors that affect acceleration, we built a regression model for each individual to estimate distance based on their specific walking/running patterns. We developed a regression model using data from five different speeds (SC-L1 to SC-L5) to map step length to the corresponding anteroposterior acceleration amplitudes using pairs of distance and acceleration values (Figure <ref>A). We calculated distance for a single speed by averaging the step distances, while the acceleration was calculated by averaging the maximum values of acceleration in each step (Figure <ref>B). To ensure a fair comparison, we evaluated three sources of estimated data: first, ground-truth data based on video observation of distance traveled and number of steps; second, the pedometer sensor in the iPhone, which provided estimates of distance and number of steps; and third, our Walk4Me system <cit.>, which includes calibration regression models for estimating distance and a signal processing algorithm for measuring number of steps. We estimated the speed, step length, and frequency as derivatives from the regression and signal processing. §.§ Participants Fifteen children with Duchenne muscular dystrophy (DMD) and fifteen typically developing (TD) peers participated in gait speed experiments. The age of the participants ranged from 3 to 16 years, with a mean age of 8.6 years and a standard deviation of 3.5. Their body weight ranged from 17.2 to 101 kg, with a mean weight of 36 kg and a standard deviation of 18.8. Their height ranged from 101.6 to 165.5 cm, with a mean height of 129 cm and a standard deviation of 15.8. 
All participants had at least 6 months of walking experience and were able to perform a 10-meter walk/jog/run test in less than 10 seconds. Participants with DMD had a confirmed clinical diagnosis and were either naïve to glucocorticoid therapy or on a stable regimen for at least three months. Northstar Ambulatory Assessment (NSAA) scores for DMD participants ranged from 34 to 8, indicating typical levels of function to clinically-apparent moderate mobility limitation (Table-<ref>). The protocol was reviewed and approved by the Institutional Review Board (IRB) at the University of California, Davis, and informed consent was obtained from each participant prior to the initiation of study procedures. Measurements were taken at eight different walking/running gait activities, including speed-calibration tests at slow walk to running speeds (SC-L1, SC-L2, SC-L3, SC-L4, and SC-L5), a 6-minute walk test (6MWT), a 100-meter fast-walk/jog/run (100MRW), and a free walk (FW). §.§ Equipment Acceleration data from each participant were sampled at a rate of 100 Hz using an iPhone 11 and our Walk4Me smartphone application  <cit.>. The phones were securely attached at the waist with an athletic-style elastic belt enclosure, positioned approximately at the level of the lumbosacral junction. The raw accelerometer signal was synchronized with video recordings captured by a GoPro camera at a rate of 30 Hz. An observer marked the events where a participant passed the start or end of the duration or distance assigned to each activity using the web portal of the Walk4Me system. §.§ Gait and Events Detection and Data Analysis We collected the raw accelerometer signal from 30 participants, which included the x, y, and z axes (vertical, mediolateral, and anteroposterior), along with the corresponding timestamps. Based on the findings of Zijlstra <cit.>, we observed that the initial contact (IC) events were more distinguishable in the anteroposterior axis (z-axis) compared to the other axes. Therefore, we used the anteroposterior signal from the raw accelerometer data to develop our method for counting the number of steps, estimating step length, and calculating the total distance individuals walked at different speeds. §.§.§ Method of Step Detection Figure <ref>A presents a raw accelerometer signal of the anteroposterior movement (z-axis) from a typically developing (TD) participant during fast walk speed calibration (SC-L4) for 3.9 seconds. The steps in the anteroposterior signal are characterized by long wavelengths (low frequency), while other wavelengths (high frequency) represent noise signals. To extract the steps, we applied a low-pass filter to the signal to smooth the signal and remove short-term fluctuations while preserving the longer-term trend (Figure <ref>A and Figure <ref>B). We then identified the peak values of the filtered signal, as the peaks occur only once per step in the filtered signal (Figure <ref>C). The number of peaks corresponds to the number of steps taken by the participant. Figure <ref>A shows the estimated number of steps using our method as blue dots, compared to the ground truth represented by a black line. The built-in pedometer steps estimation is shown in red. §.§.§ Method of IC Detection To detect the IC events, we find the midpoint between two peaks in the filtered signal (Figure <ref>D), which corresponds to the toe-off (TO) events during the gait cycle based on observation. 
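As an illustration of the step-counting stage described above, the following sketch low-pass filters the anteroposterior acceleration and counts one smoothed peak per step, then takes midpoints between peaks as step boundaries. This is not the authors' implementation: the filter order, cutoff frequency and minimum peak spacing are assumed values chosen only for illustration.

```python
# Illustrative sketch (not the study's code) of the step-counting stage:
# low-pass filter the anteroposterior (z-axis) acceleration, then count
# one smoothed peak per step. Cutoff, filter order and peak spacing are
# assumptions for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 100.0          # sampling rate of the waist-worn accelerometer (Hz)
CUTOFF_HZ = 3.0     # assumed low-pass cutoff; keeps the step-scale trend

def count_steps(accel_ap, fs=FS, cutoff=CUTOFF_HZ):
    """Return the filtered signal and the indices of per-step peaks."""
    b, a = butter(N=4, Wn=cutoff / (fs / 2.0), btype="low")
    smooth = filtfilt(b, a, accel_ap)              # zero-phase smoothing
    # even when running there are at most a few steps per second,
    # so require at least 0.25 s between consecutive peaks
    peaks, _ = find_peaks(smooth, distance=int(0.25 * fs))
    return smooth, peaks

# Example with a synthetic 2 Hz "gait" signal plus noise:
t = np.arange(0, 10, 1 / FS)
z = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)
smooth, peaks = count_steps(z)
print("estimated step count:", len(peaks))          # ~20 steps expected
# step boundaries (toe-off candidates) as midpoints between smoothed peaks
boundaries = (peaks[:-1] + peaks[1:]) // 2
```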
We then identify all the peaks that occur within each step duration in the original acceleration signal (Figure <ref>E). Next, we determine the maximum peak value (anteroposterior G's), which corresponds to the time point of each IC (Figure <ref>F). §.§.§ Method of Step Length Estimation Using Regression We create an individualized regression model for each participant to associate average peak acceleration values with step lengths. Figure <ref>A depicts the data flow of our model training and prediction process. Each model is trained using five different participant-selected calibration speeds (SC-L1 to SC-L5). For each speed, we calculate the average acceleration peak values by taking the mean of all the peaks as described in Section <ref>. To calculate the average step length for training, we divide the observed ground-truth distance by the number of steps obtained from Section <ref>. This process is repeated for each of the five calibration speeds (e.g., point SC-L4 in Figure <ref>A). The resulting individualized equation through all five points allows us to input the peak acceleration value of any step within the participant's range of ambulatory velocity to estimate that step's length (shown as the green line in Figure <ref>A). §.§.§ Estimating the Distance After establishing the individualized model, it can be used on unseen data. We calculate the step lengths of all identified steps from a previously unseen event and accumulate them to calculate the total distance traveled by the individual. In this project, we used 100MRW, 6MWT, and FW as input signals during the inference stage, as shown in Figure <ref>B, and compared the calculated distances with the ground-truth observed distances and the device's internal pedometer. §.§.§ Calculating the Average Step Length During the inference stage, to calculate the average step length of an individual, we divide the distance estimated from Section <ref> by the number of steps obtained from Section <ref>. Figure <ref>C shows the estimated average step length using our ML model as blue dots, compared to the ground-truth average step length represented by a black line. The red dots represent the average step length estimated by the built-in pedometer. §.§.§ Gait Pattern Representation After determining the midpoint boundaries between steps, we generate a composite map of each step normalized to the gait cycle percentage, allowing for visual examination of AI-determined steps for irregularities or comparison of averaged accelerometer patterns between individuals (Figure <ref>). The gait cycle is identified using peak detection at the IC event, marking the beginning and end of each step. The average acceleration patterns are also calculated from all gait cycles across all activities and at various speeds. The forward movement (x-axis) is normalized to a time scale of 0 to 100%. Using this method, we can identify the IC of every single step and estimate the step duration (Figure <ref>F) without the need to use GRF <cit.>. By comparing the gait cycles of two participants (TD and DMD peers) at various speeds, distinctly different patterns of acceleration magnitude emerge (Figure <ref>), highlighting differences in gait patterns between the two participants. §.§.§ Error Percentage Rates To compare observed ground-truth step counts, distance traveled, and average step lengths with our model's estimates and the pedometer estimates native to the mobile devices, we employed two methods. First, we calculated the aggregated error for all estimates by determining an error percentage rate (Error_rate) using equation <ref>. Error_rate = | ( ∑_i^n |V_c - V_o|_i - ∑_i^n |V_o|_i ) / ( ∑_i^n |V_o|_i ) | × 100 The Error_rate is calculated by aggregating the residual values of all participants (i) for all activities. The residual is the difference between the proposed method's estimate (V_c) and the ground truth observation (V_o). The total aggregated residual is then subtracted from the total ground truth and divided by the total ground truth. Table-<ref> compares the error percentage rate of step count, distance, and average step length between our Walk4Me system and iPhone pedometer measurements. Second, to evaluate the percentage error for each individual measurement and estimate pair, we subtracted the model estimate from the observed ground truth measure and divided it by the ground truth measure multiplied by 100 for each event. We computed the mean (SD) percentage error for step count, distance traveled, and step length parameters for calibration events SC-L1 to SC-L5 combined, and separately for 6MWT, 100MRW, and FW efforts combined, as well as for all efforts combined. We compared the mean percentage error values between control participants and those with DMD using simple t-tests for each contrast. § RESULTS In this study, we assessed the accuracy of step counts during walking, jogging, and running using our Walk4Me system compared to the iPhone pedometer. We validated our results by comparing both systems with ground-truth data. Our findings, as shown in Table-<ref>, indicate that the Walk4Me system had an average step count error rate of 3.46%, demonstrating reliable performance in accurately tracking steps at different speeds. The combined error rates from participants with Duchenne muscular dystrophy (DMD) and typically developing (TD) participants ranged from 1.26% during the slow walk pace (SC-L2) to 7.26% during the fast 100m run. In contrast, the iPhone's built-in pedometer showed an average error rate of 48.46% during short- to moderate-distance tasks at varying gait velocities. The iPhone pedometer had the lowest error rate of 36.35% during the longer-duration fast walk 6MWT task, and the highest error rate of 85.26% during the short-duration jogging/running task SC-L5. For distance measurement, our Walk4Me system showed an average error rate of 5.83%, with the lowest error rate of 3.9% during the fast walk SC-L4 pace, and the highest error rate of 7.74% during the fast 100m run. The iPhone's built-in pedometer had an average error rate of 42.23%, with task-specific error ranging from 27.42% during the 6MWT to 82.54% during the SC-L5 jogging/running task. 
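As a concrete illustration of the calibration procedure described in the Methods, the sketch below fits a per-participant line through the five (mean peak acceleration, mean step length) calibration points and then converts the per-step peaks of an unseen event into step lengths, a total distance and an average step length. The linear form of the regression and all numerical values are illustrative assumptions, not values taken from the study.

```python
# Illustrative sketch (assumptions, not the authors' implementation) of the
# per-participant calibration: map the mean per-step peak acceleration at
# each calibration speed (SC-L1..SC-L5) to the mean step length observed at
# that speed, then reuse the fitted relation on unseen events.
import numpy as np

def fit_step_length_model(mean_peak_acc, mean_step_len):
    """Least-squares line through the five calibration points."""
    slope, intercept = np.polyfit(mean_peak_acc, mean_step_len, deg=1)
    return slope, intercept

def estimate_distance(per_step_peaks, slope, intercept):
    """Predict each step's length from its peak acceleration and sum them."""
    step_lengths = slope * np.asarray(per_step_peaks) + intercept
    return step_lengths.sum(), step_lengths

# Hypothetical calibration data for one participant (units: g and metres):
cal_acc = np.array([0.35, 0.55, 0.80, 1.10, 1.60])   # SC-L1 ... SC-L5
cal_len = np.array([0.45, 0.55, 0.65, 0.80, 1.05])   # observed dist / steps
slope, intercept = fit_step_length_model(cal_acc, cal_len)

# Unseen event (e.g. a 6MWT): peak acceleration of each detected step
peaks_6mwt = np.array([0.7, 0.8, 0.75, 0.9, 0.85])
total_m, lengths = estimate_distance(peaks_6mwt, slope, intercept)
print(f"estimated distance over these steps: {total_m:.2f} m")
# average step length = estimated distance / number of detected steps
print(f"average step length: {total_m / len(peaks_6mwt):.2f} m")
```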
For step length measurement, our Walk4Me system showed an average error rate of 5.80%, with the lowest error rate of 3.68% at a comfortable walking pace (SC-L3), and the highest error rate of 8.64% during the short-term jog/run SC-L5 task. The iPhone's built-in pedometer demonstrated an average error rate of 46.40%, which varied from 30.76% during SC-L5 to 76.10% during SC-L1. In contrast to overall aggregate accuracy, the mean (SD) accuracy of model predictions for individual events compared to ground truth observations for step counts, distance traveled, and step lengths is presented in Table-<ref> and depicted in Figure <ref>A, Figure <ref>B, and Figure <ref>C. Predicted and observed values for all three parameters showed a strong correlation (Pearson's r = -0.9929 to 0.9986, p<0.0001). The estimates demonstrated a mean (SD) percentage error of 1.49% (7.04%) for step counts, 1.18% (9.91%) for distance traveled, and 0.37% (7.52%) for step length compared to ground truth observations for the combined 6MWT, 100MRW, and FW tasks. There were no statistically significant differences in mean error percentages between control participants and those with DMD (data not shown). § DISCUSSION The use of travel distance and step length as gait metrics is essential for clinical gait assessment in the community setting. However, accurately measuring step length traditionally requires a clinical facility or gait lab with a trained observer present during the assessment session. Clinical assessment methods are considered the most detailed and ideal, but their availability may be limited due to factors such as facility availability, staff availability, difficulties with patient travel to assessment locations, or public health restrictions such as those related to COVID-19. Additionally, clinical observation methods can be susceptible to human error, such as observer fatigue or distraction, as well as instrument errors, failed video recordings, or obstructed views, which can limit the utility of the collected data. An alternative option to overcome these limitations and facilitate more frequent and convenient collection of gait data in the community setting is to use off-the-shelf technologies such as pedometers, which are commonly built into smartphones and widely used in sports. However, it is crucial to assess the reliability of these devices, particularly when used for clinical purposes. Therefore, we conducted experiments to clinically validate the reliability of using a pedometer and compared the results with those obtained by observers. We propose an ML-based signal processing method using our Walk4Me system, which can estimate step counts, distance traveled, and step lengths with increased levels of accuracy. The advantage of our method is that it requires less observed interaction, only necessitating a short duration of time for five speed-calibration tests. Our system can automatically estimate distance and step length without the need for human interaction. Some of the source code and a demo of this paper can be found at <https://albara.ramli.net/research/ic> along with some additional results. § CONCLUSION This study introduces a novel signal processing and machine learning technique estimates that accurately identifies steps and step length based on the individual's gait style. Our findings demonstrate that using a single accelerometer worn near the body's center of mass can be more accurate than a standard pedometer. 
Our method can be applied to both healthy individuals and those with muscle disorders without the need for ground reaction force (GRF) measurements. To our knowledge, this is the first study to propose a method that extracts CFs from raw accelerometer data across the attainable range of gait speeds in healthy participants and those with muscle disease. On average, our method of counting steps and estimating stride length and distance traveled performs well when applied to longer structured sub-maximal clinical testing efforts and free-roaming self-selected pace travel. In these settings, our methods surpass the pedometer functions native to the mobile devices we use. This will allow us to extend basic elements of gait analysis to community settings using commonly available consumer-level devices. § DATA AVAILABILITY The authors commit to providing data access in compliance with Gait and Pose journal, grant sponsor, and University of California guidelines. Requests for data access can be addressed to the corresponding author. § FUNDING ACKNOWLEDGEMENT This study was partially funded for participant assessment and data collection by a grant from the U.S. Department of Defense (W81XWH-17-1-0477), a pilot grant from the University of California Center for Information Technology Research in the Interest of Society (CITRIS) and the Banatao Institute, and by a research grant from the Muscular Dystrophy Association. § ACKNOWLEDGMENTS We would like to thank the students of the UC Davis EEC193A/B Winter 2020 Senior Design Projects Team (Nikki Esguerra, Ivan Hernandez, Zehao Li, and Jingxuan Shi) for their work piloting proof-of-concept methods for clinical feature extraction. § DECLARATION OF INTEREST The authors declare that they have no competing interests and no conflicts to declare.
http://arxiv.org/abs/2307.04120v1
20230709082441
Toward a stellar population catalog in the Kilo Degree Survey: the impact of stellar recipes on stellar masses and star formation rates
[ "Linghua Xie", "Nicola R. Napolitano", "Xiaotong Guo", "Crescenzo Tortora", "Haicheng Feng", "Antonios Katsianis", "Rui Li", "Sirui Wu", "Mario Radovich", "Leslie K. Hunt", "Yang Wang", "Lin Tang", "Baitian Tang", "Zhiqi Huang" ]
astro-ph.GA
[ "astro-ph.GA" ]
subject Article SPECIAL TOPIC: 2023 04 xx x xxxx 000000 xxx xxx 1,2]Linghua Xie 1,2]Nicola R. [email protected] 3]Xiaotong [email protected] 4]Crescenzo Tortora 5]Haicheng Feng 1] Antonios Katsianis 6,7]Rui Li 1,2]Sirui Wu 8]Mario Radovich 9]Leslie K. Hunt 2,10]Yang Wang 2,11] Lin Tang 1]Baitian Tang 1,2]Zhiqi Huang Xie L. Xie L. et al. [1]School of Physics and Astronomy, Sun Yat-sen University, Zhuhai Campus, 2 Daxue Road, Xiangzhou District, Zhuhai, P. R. China; [2]CSST Science Center for Guangdong-Hong Kong-Macau Great Bay Area, Zhuhai, China, 519082 [3]Institute of Astronomy and Astrophysics, Anqing Normal University, Anqing, Anhui 246133, China [4]INAF – Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, 80131 - Napoli, Italy; [5]Yunnan Observatories, Chinese Academy of Sciences, Kunming, 650011, Yunnan, People's Republic of China [6]School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China; [7]National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012, China [8]INAF - Osservatorio Astronomico di Padova, via dell'Osservatorio 5, 35122 Padova, Italy [9]INAF - Osservatorio Astronomico di Arcetri, Largo Enrico Fermi 5, 50125, Firenze, Italy [10]Peng Cheng Laboratory, No.2, Xingke 1st Street, Shenzhen, 518000, P. R. China [11]School of Physics and Astronomy, China West Normal University, ShiDa Road 1, 637002, Nanchong, China The Kilo Degree Survey (KiDS) is currently the only sky survey providing optical (ugri) plus near-infrared (NIR, ZYHJK_S) seeing matched photometry over an area larger than 1000 deg^2. This is obtained by incorporating the NIR data from the VISTA Kilo Degree Infrared Galaxy (VIKING) survey, covering the same KiDS footprint. As such, the KiDS multi-wavelength photometry represents a unique dataset to test the ability of stellar population models to return robust photometric stellar mass (M_*) and star-formation rate (SFR) estimates. Here we use a spectroscopic sample of galaxies for which we possess u g r i Z Y J H K_s “gaussianized” magnitudes from KiDS data release 4. We fit the spectral energy distribution from the 9-band photometry using: 1) three different popular libraries of stellar population templates, 2) single burst, simple and delayed exponential star-formation history models, and 3) a wide range of priors on age and metallicity. As template fitting codes we use two popular softwares: LePhare and CIGALE. We investigate the variance of the stellar masses and the star-formation rates from the different combinations of templates, star formation recipes and codes to assess the stability of these estimates and define some “robust” median quantities to be included in the upcoming KiDS data releases. As a science validation test, we derive the mass function, the star formation rate function, and the SFR-M_* relation for a low-redshift (z<0.5) sample of galaxies, that result in excellent agreement with previous literature data. The final catalog, containing ∼290 000 galaxies with redshift 0.01<z<0.9, is made publicly available. 
98.62.Lv,98.62.Ai,98.62.Ck Toward a stellar population catalog in the Kilo Degree Survey: the impact of stellar recipes on stellar masses and star formation rates [ August 12, 2023 ======================================================================================================================================= § INTRODUCTION The spectral energy distribution (SED) of galaxies provides crucial information on the properties of their stellar populations at the different cosmic epochs. In particular, the stellar mass content and the star formation history of galaxies are of major importance to understand the mechanisms of their formation, including the impact of the environment on their properties <cit.>. For instance, the study of the stellar mass function as a function of the redshift is a crucial probe of the stellar mass assembly of galaxies <cit.>, and combined with the halo mass function of simulations, can be used as a cosmological probe, e.g. in abundance matching studies (e.g. <cit.>, <cit.>, <cit.>). Similarly, the star formation rate function can measure the growth of the stellar content of galaxies across the cosmic time (e.g. <cit.>). A relevant example of scaling relation is the star formation versus stellar mass, also known as the galaxy main sequence (<cit.>). This is crucial to understand the formation mechanisms of galaxies, in particular the relation between the star formation activity across time (<cit.>), and the gas consumption during galaxy formation (<cit.>). The measurement of the galaxy stellar masses and star formation rates mainly relies on details of stellar population analyses (<cit.>, <cit.>), and their ability to constrain the stellar mass-to-light ratios (e.g. <cit.>) and specific star formation history (e.g. <cit.>). This is a notoriously complex problem (<cit.>), due to the existence of degeneracies among some of the parameters, in particular dust, age and metallicity (e.g. <cit.>, <cit.>, <cit.>, <cit.>). Furthermore, in order to convert the stellar population parameters into “galaxy” properties, one needs to account for the galaxy intrinsic luminosity, which carries other uncertainties, e.g. galaxy distances, or redshifts. This step is generally incorporated in the stellar population codes that can model the SED using the redshift as a free parameter (e.g. <cit.>, <cit.>, <cit.>, <cit.>) or as an input from spectra or photo-z codes (e.g. <cit.>, <cit.>, <cit.>). Despite these difficulties, spectroscopical data (<cit.>, <cit.>, <cit.>, <cit.>) or multi-band photometry (e.g., <cit.>, <cit.>) have been routinely used to derive stellar masses, age, metallicity using simple stellar population (SSP, e.g. <cit.>) or more complex stellar population models with a parametrized star formation history (SFH, e.g. delayed exponential: <cit.>, log-normal: <cit.>, double power law: <cit.>, Γ: <cit.>) or non-parametric SFHs <cit.>. Optical broadband photometry alone cannot break the dust-age–metallicity degeneracies (e.g. <cit.>), while extending the wavelength range in the near-infrared (NIR) can provide additional constraints that can alleviate them (<cit.>, <cit.>). The combination of optical and NIR photometry is also effective for photometric redshifts from SED fitting techniques, which are an important ingredient in stellar population analyses. These consist of finding a model galaxy spectrum, given by a linear combination of representative stellar or galaxy templates, which best fits the observed galaxy SED (<cit.>). 
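As a schematic illustration of this template-fitting idea, the sketch below scans a small grid of model SEDs and keeps the one minimizing χ², with the flux normalization of each template solved analytically. The toy fluxes and templates are placeholders, and production codes such as LePhare or CIGALE additionally handle redshifting, dust attenuation, emission lines and priors; this is only a conceptual sketch.

```python
# Minimal sketch of broad-band template fitting (placeholder data only).
# For each template the best flux scaling has a closed form, and the
# template with the lowest chi^2 is retained.
import numpy as np

def fit_templates(obs_flux, obs_err, template_fluxes):
    """obs_flux, obs_err: (n_bands,); template_fluxes: (n_templates, n_bands)."""
    w = 1.0 / obs_err**2
    chi2_best, best = np.inf, None
    for k, model in enumerate(template_fluxes):
        # analytic normalization minimizing chi^2 for this template
        scale = np.sum(w * obs_flux * model) / np.sum(w * model**2)
        chi2 = np.sum(w * (obs_flux - scale * model) ** 2)
        if chi2 < chi2_best:
            chi2_best, best = chi2, (k, scale)
    return best, chi2_best

# Toy example: 9 bands (ugri + ZYJHKs) and 3 made-up templates
rng = np.random.default_rng(1)
true = np.array([1.0, 1.5, 2.2, 2.8, 3.0, 3.1, 3.3, 3.4, 3.5])
templates = np.vstack([true, true[::-1], np.ones(9)])
obs = 2.0 * true + rng.normal(0, 0.05, 9)
(best_k, best_scale), chi2 = fit_templates(obs, np.full(9, 0.05), templates)
print(best_k, round(best_scale, 2), round(chi2, 2))   # expect template 0, scale ~2
```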
Here, the wide baseline can alleviate the degeneracy between various galaxy spectra as a function of galaxy redshifts (<cit.>). In this paper, we want to test the outcomes of different stellar population codes, namely LePhare (<cit.>) and CIGALE <cit.>, and different stellar population templates and star formation histories, using a multi-band, seeing matched catalog of galaxies collected in the fourth data release (DR4) of the Kilo Degree Survey (KiDS, <cit.>, K+19 hereafter). The catalog includes sources for which we possess 1) optical photometry in ugri bands and NIR photometry in ZYHJK_s bands from the VISTA Kilo Degree Infrared Galaxy (VIKING, <cit.>), 2) spectroscopic redshifts (spec-zs, hereafter) from different surveys, and 3) deep learning photometric redshifts. It collects about 290 000 sources, a subsample of which has already been used in KiDS to calibrate photometric redshifts (e.g., <cit.>). The advantage of spectroscopic redshifts is that they alleviate the degeneracies between colors and redshifts, which further impact the accuracy of the stellar parameters. The addition of photometric redshifts will also allow us to assess the impact of their larger uncertainties on the same stellar parameters. In fact, the final goal of this work is to evaluate the variance of the stellar population quantities from different SED fitting recipes, popular stellar population templates, as well as the uncertainties on redshifts. We will determine what are the most stable parameters and define robust quantities suitable for science applications. This is a first step to define a strategy to produce a robust stellar population catalog for the upcoming KiDS data release 5 (KiDS-DR5, Wright et al. 2023). The main parameters we are interested in are the stellar mass and the star formation rate, but we will also provide the catalog of ages and metallicities of the galaxy stellar populations from a large set of priors. Since for this spectroscopic sample we also possess very accurate morphotometric redshifts from deep learning (i.e. GaZNet, <cit.>), we can finally test the impact of redshifts derived from pure multi-band photometric catalogs combining optical and NIR, like the ones expected to be collected from future large sky surveys like Euclid mission (<cit.>), Vera Rubin Legacy Survey in Space and Time (VR/LSST; <cit.>), China Space Station Telescope (CSST; <cit.>). There have been previous works including stellar population analyses of KiDS galaxy catalogs, either determining stellar mass only, for weak lensing studies (<cit.>) or estimating galaxy properties, including photometric redshifts and stellar masses, for bright galaxies (i.e. r<21, <cit.>), or estimating structural parameters and stellar mass to select ultra-compact and massive galaxies (<cit.>) and for central dark matter studies (<cit.>). However, none of these has investigated the impact on the stellar masses of the combination of fitting procedure and stellar templates. A similar analysis has been provided for the CANDELS survey (<cit.>), where they used optical plus NIR photometry and tested the impact on stellar masses of different stellar population codes, stellar templates and star formation histories. As a science validation test, we will conclude our analysis by using stellar mass and star formation rate estimates to derive the stellar mass function, the star formation rate function, and the mass vs. 
star formation rate relation of the galaxies from the KiDS spectroscopic sample, using both spectroscopic and deep learning redshifts and compare them with literature data at redshift z<1. The paper is organised as follow. In Sect. <ref> we introduce the data and the set-up of the stellar population analysis; in Sect. <ref> we present the stellar population inferences, assess their accuracy and precision using a series of statistical estimators, and define a robust definition of the stellar mass and star formation estimates; in Sect. <ref> we discuss the dependence of the accuracy and scatter on galaxy properties and finally show the galaxy mass function, the star formation rate function, and the stellar mass-star formation rate relation as a science validation test; in Sect. <ref> we draw some conclusions and perspectives for future analyses. Throughout the paper, we will adopt the following cosmological parameters: Ω_m = 0.3, Ω_Λ = 0.7, H_0 = 70 km s^-1 Mpc^-1. § DATA AND METHODS The spectroscopic sample which we use in this paper consists of 9-band photometry from the 1000 deg^2 area of KiDS data release 4 (KiDS-DR4 hereafter, see K+19), plus spectroscopic redshifts collected from the Galaxy Mass Assembly <cit.> survey, and the Sloan Digital Sky Survey/Baryon Oscillation Spectroscopic Survey <cit.>, overlapping with the KiDS footprint. We also add further machine learning redshifts from the GaZNet convolutional network presented in <cit.>, as these have been demonstrated to provide very accurate redshifts up to z∼ 3, for galaxy samples with magnitude r 22.5. In the following we describe in more details the content of the dataset and the different stellar population model set-ups used to analyze them. §.§ Photometry and spectroscopic redshifts The photometric data of the spectroscopic sample are collected from the KiDS and the VIKING surveys. These are two sister surveys covering a total area of 1350 deg^2 of the sky, in ugri and ZYJHK_s bands, respectively. The KiDS survey has been carried out at the VST/Omegacam telescope in Cerro Paranal (<cit.>; <cit.>). It has been optimized for weak lensing in the r-band, which provides best seeing imaging (average FWHM∼0.7”), and mean limiting AB magnitude (5σ in a 2” aperture) of 25.02±0.13. The other bands have been observed with poorer seeing and reached mean limiting AB magnitudes of 24.23±0.12, 25.12±0.14, 23.68±0.27 for u, g and i, respectively (see K+19). VIKING has been carried out at the VISTA/VIRCAM (<cit.>) and complemented KiDS observations with five NIR bands (Z, Y, J, H and Ks). The median value of the seeing is ∼ 0.9” (<cit.>), and the AB magnitude depths are 23.1, 22.3, 22.1, 21.5 and 21.2 in the five bands (<cit.>), respectively. The 9-band fluxes have been measured via the Gaussian Aperture and PSF (GAaP) photometry method (<cit.>), which gives colours that are corrected for PSF differences. Hence, GAaP photometry naturally provides seeing matched fluxes for each source in the catalog, by definition. However, sources more extended than the aperture function result in underestimated total fluxes. In order to correct this systematic effect, a total aperture correction needs to be applied to derive the “total” galaxy properties (see Sect. <ref>). As discussed in K+19 the GAaP photometry is Galactic extinction corrected using the <cit.> maps with the <cit.> coefficients. As a spectroscopic database, we have collected redshifts from: 1) GAMA data release 4 (<cit.>), and 2) SDSS data release 17 (<cit.>, SDSS-DR17 hereafter). 
Previous compilations of spectroscopic data overlapping with the KiDS area did not include SDSS-DR17, but included other high redshift datasets (see e.g. <cit.> and references therein). However, the statistics of galaxies matching the KiDS-DR4 catalog at redshift larger than z∼1 are rather sparse. On the other hand, for the analysis we are interested in performing in this paper, SDSS-DR17 and GAMA provide a quite abundant sample of galaxies at z≲1. In particular, GAMA is the most complete sample, reaching ∼95.5% completeness for r-band magnitude r<19.8 (<cit.>). To match the redshift distributions of the two catalogs, we exclude sources at z>0.9, where the overall catalog drops to a constant number of a few tens of galaxies per redshift bin, mainly from SDSS-DR17. We also notice that a large portion of sources at z<0.005 are classified as “stars” by their parent surveys. Hence, to avoid the contamination from other misclassified stars, we decide to use a conservative cut and select only sources with z>0.01. Equally, we exclude all sources classified as Quasars (QSO), as their SED might be dominated by the nuclear emission, rather than the stellar population light. These criteria together produce a final catalog of 242678 GAMA and 77859 SDSS-DR17 galaxies, which includes 31728 repeated sources. For these duplicates, we adopt the SDSS-DR17 redshifts, which have errors, finally ending up with a total of 288 809 objects. In the following, we consider these sources to be “galaxies”, although we might still expect some minor contamination from unclassified QSO (or active galactic nuclei, AGN). The distributions of the redshift and the r-band Kron-like magnitude, MAG_AUTO (r-mag for short), obtained by SExtractor <cit.> for these galaxies are finally reported in Fig. <ref>, where we have broken the sample into the two original spectroscopic surveys, for clarity. From the r-mag distribution we can see the different completeness magnitude of the two samples, with SDSS-DR17 showing a peak at r∼17.8 and GAMA at r∼19.8. The sample (in)completeness is not expected to impact the main goal of our analysis, which is to study the response of the 9-band optical+NIR photometry to the different stellar population recipes; however, we will need to consider this when the stellar parameters are used for the science validation test (see Sect. <ref>). §.§ Statistical estimators Here we introduce some statistical estimators we will use throughout the paper: 1) the relative bias, 2) the median absolute error and 3) the outlier fraction. 1) The relative bias is defined as Δ p = p_i-r_i, where p_i and r_i are the estimated (log) parameters and the reference value for any galaxy i of the sample. In the case of redshifts, this becomes μ = (p_i - z_i)/(1 + z_i), where p_i are the predicted photometric redshifts and z_i are the spectroscopic redshifts (see <cit.>). 2) The Normalized median absolute deviation (NMAD) is then defined as: NMAD = 1.4826 × median (|BIAS - median (BIAS)|), where we identify by BIAS either the Δ p or the μ defined above. This gives a measure of the overall scatter of the predicted values with respect to the 1-to-1 relation, i.e. the precision of the method. 3) Fraction of outliers. It is often useful to define the fraction of catastrophic estimates that significantly deviate from the mean values, as a measure of the robustness of an estimator. In the case of redshifts, this is defined as the fraction of discrepant estimates with the condition |μ|>0.15 (see, e.g., <cit.>); an illustrative implementation of these estimators is sketched below. 
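A minimal sketch of these estimators could look as follows; the array names and toy values are hypothetical and serve only to make the definitions above explicit.

```python
# Sketch of the three estimators defined in this section (illustrative
# only; column/array names are hypothetical). BIAS is either Delta p for
# log parameters or mu for redshifts.
import numpy as np

def relative_bias(pred, ref):
    return pred - ref                               # Delta p, for log quantities

def redshift_bias(z_pred, z_spec):
    return (z_pred - z_spec) / (1.0 + z_spec)       # mu

def nmad(bias):
    return 1.4826 * np.median(np.abs(bias - np.median(bias)))

def outlier_fraction_redshift(mu, thresh=0.15):
    return np.mean(np.abs(mu) > thresh)

# toy usage with hypothetical photometric and spectroscopic redshifts
z_spec = np.array([0.10, 0.30, 0.50, 0.70])
z_phot = np.array([0.11, 0.29, 0.55, 0.69])
mu = redshift_bias(z_phot, z_spec)
print(np.median(mu), nmad(mu), outlier_fraction_redshift(mu))
```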
For the stellar population parameters we decided to use a 2σ level in the log-normal distribution of the estimated values, which allow us to spot strong deviations from gaussianity. §.§ Deep Learning morphoto-metric redshifts from GaZNet As mentioned in Sect. <ref>, in this paper we want to test the robustness of the derived quantities from a full photometric samples. To do that, besides the spec-z as in Sect. <ref>, we use the morphoto-metric redshifts obtained by combining KiDS r-band images and the 9-band catalog using the Galaxy morphoto-Z Network (GaZNet, <cit.>, Li+22 hereafter). GaZNet has been previously tested on a KiDS galaxy sample (see Li+22 for details) and demonstrated to achieve very high precision in normalized median absolute deviation (NMAD=0.014 for z1 redshift and NMAD=0.041 for z1 redshift galaxies) and low outlier fraction (0.4% for lower and 1.27% for higher redshift galaxies, respectively), down to r∼22. These performances are better than the ones obtained by standard bayesian methods in KiDS for “point” estimates (e.g. BPZ, see <cit.>) and other machine learning methods based on photometry only data applied previously to KiDS datasets (e.g. <cit.>, <cit.>). The level of accuracy reached by the deep learning estimates is shown in Fig. <ref>, where we compare the GaZNet estimated redshifts vs. the spec-z catalog described above. In this figure we show the GaZNet estimates also for the SDSS-DR17 sample, that was not part of the deep learning training/testing in Li+22. As such, the SDSS-DR17 sample, added in this paper, represents a totally independent galaxy test sample with rather different distribution in redshift and luminosity than the original training sample (see Fig. <ref>). This gives us a more realistic sense of the scatter we can expect from the full photometric samples from KiDS, covering similar redshift/magnitude ranges. For the predictions in Fig. <ref>, we obtain a relative bias μ=0.005, a NMAD=0.017 and an outlier fraction of 0.4%, which are perfectly in line with the results found on <cit.>, hence confirming the very good performances of the deep learning morphoto-z provided by the GaZNet. We just notice a tail of outliers at z0.05, which are overestimated by the GaZNet and that might yet produce some systematics in the stellar population parameters. §.§ LePhare stellar population: set-up and templates LePhare (<cit.>), is a template-fitting code, which performs a simple χ^2 minimization between the stellar population synthesis (SPS) theoretical models and data, in a standard cosmology (see <ref>). In our analysis we adopt a <cit.> Initial Mass Function[In LePhare, there is no real option to set the IMF, but this is implemented in the stellar libraries. For the <cit.> libraries the IMF closer to Chabrier is the <cit.> IMF. To account for these IMF difference we will simply adopt the standard -0.05 dex correction to transform Kroupa-based into Chabrier-based masses. ] (IMF), the <cit.> dust-extinction law. We also include the contribution of nebular emission, e.g. from low-mass starforming galaxies (see Sect. <ref>): LePhare uses a simple recipe based on the Kennicutt relations <cit.> between the SFR and UV luminosity, Hα and [OII] lines. Regarding the stellar templates, we test three different libraries: 1) the standard <cit.>, 2) the <cit.> and 3) the <cit.> stellar population synthesis (SPS) models. We have also adopted three different models for the star formation history (SFH), ψ(t): 1) a single burst (SB, hereafter), i.e. 
ψ(t)=δ(t_0), where t_0 is the age of the galaxy, 2) the exponentially declining law (ExD, hereafter), ψ(t)∝ exp(-t/τ), and finally 3) a combination of both (SB+ExD), which is directly allowed by, e.g., the M05 stellar libraries. We remark here that the choice of the exponential declining SFH is due to the limited choice offered by Le Phare, even though the ExD is flexible enough to embrace a variety of realistic SFHs. CIGALE (see below) will give us the chance to make a different choice, although a more general approach with a larger variety of SFHs will be considered in future analyses. The full LePhare set-up is summarized in Table <ref>. As anticipated in Sect. <ref>, we use the redshift, both spec-z and morphoto-z, as input in LePhare. The stellar population parameters we use to perform the best fit to the GAaP 9-band magnitudes, described in Sect. <ref>, are: age, metallicity, and star formation parameters (either δ(t_0) or τ), which are assumed to vary as in Table <ref>. Consistently with previous literature (e.g. <cit.>, <cit.>), we use the best-fit parameters as a reference for this analysis. §.§ CIGALE stellar population: set-up and templates We also adopt the Code Investigating GALaxy Emission (CIGALE, <cit.>, v2020.0), which can construct the FUV to the radio SEDs of galaxies and provide star formation rate, attenuation, dust luminosity, stellar mass, and many other physical quantities, using composite stellar populations from simple stellar populations combined with highly flexible star formation histories. For our analysis, we make use of BC03 and M05 stellar templates. Differently from LePhare, CIGALE does not have a pure ExD law among the SFH choices, hence we decide to adopt a delayed exponential law (DelEx, hereafter), ψ(t)∝ t/τ^2 exp(-t/τ), which is smoother than the exponential declining SFH from LePhare. Consistently with LePhare, we have adopted a <cit.> Initial Mass Function (IMF), <cit.> dust-extinction law and both the inclusion or not of nebular continuum and emission lines for the BC03 only. In CIGALE the nebular templates adopted are based on <cit.>. The full set-up parameters, including the range of the stellar parameters adopted, are summarized in Table <ref>. As for LePhare, we use the best-fit parameters from CIGALE in the following analysis. § RESULTS In this section, we discuss the outcome of the different models summarized in Table <ref>. These have, in some cases, very strong differences in the recipe of the star formation history (SFH), as we have adopted a single burst and both an exponentially declining and delayed exponential SFR, with a wide range of τ (see Sects. <ref> and <ref>, and Table <ref>). This choice is made to explore the impact of different SFHs on the stellar masses and SFR estimates. The SFH models above have have been effectively used to reproduce the properties of local galaxies <cit.> and cosmic SFR density and stellar mass density at redshifts z < 2 <cit.>. As anticipated, we also include the effect of emission lines that, although they are generally important in massive galaxies at high redshift (e.g.<cit.>, <cit.>, but see also <cit.>), can also be relevant for local low-mass starforming galaxies (e.g. <cit.>). Overall, the model combinations in Table <ref> include a fair variety of libraries and SFHs, which we expect to provide realistic evidences of systematic effects. 
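For reference, the parametric star formation histories used in these set-ups can be written down in a few lines. In the sketch below the time grid, τ values and normalization to a unit formed mass are illustrative assumptions; the single burst corresponds to the limiting case of a δ-function at t_0 and is therefore not shown.

```python
# Sketch of the parametric SFHs adopted in the set-ups (illustrative only).
# t is the time since the onset of star formation in Gyr, tau in Gyr.
import numpy as np

def sfh_exponential(t, tau):
    """Exponentially declining SFH (LePhare ExD): psi(t) ~ exp(-t/tau)."""
    return np.exp(-t / tau)

def sfh_delayed(t, tau):
    """Delayed exponential (CIGALE DelEx): psi(t) ~ t/tau^2 * exp(-t/tau)."""
    return (t / tau**2) * np.exp(-t / tau)

def normalize_to_formed_mass(t, psi, mass_formed=1.0):
    """Rescale psi so that its time integral equals the total formed mass."""
    return psi * mass_formed / np.trapz(psi, t)

t = np.linspace(0.0, 13.0, 1301)            # Gyr
for tau in (0.1, 1.0, 10.0):
    psi = normalize_to_formed_mass(t, sfh_delayed(t, tau))
    # instantaneous SFR "today" if star formation started 13 Gyr ago
    print(f"tau={tau:>4} Gyr  SFR(t=13 Gyr) = {psi[-1]:.3e} (formed mass)/Gyr")
```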
Moreover, as we are preparing the methods to be applied to the full KiDS photometric dataset, we will perform the same analysis using morphoto-zs as input, which will be provided to deeper limiting magnitudes that the ones offered by the spectroscopic “galaxy” sample (e.g. down to r∼22.5 as seen in Li+22). This will allow us to evaluate the existence or not of systematics on stellar population parameters, and the impact on the precision of the estimates, due to the usage of the more scattered photometric redshifts. Once collected all the estimates from all the configurations in Table <ref>, we will 1) check the overall consistency among the different stellar parameters; 2) discuss the scatter of the parameters and possibly define some robust estimator for them. As mentioned in the Sect. <ref>, in this first paper we concentrate on the stellar masses and the star formation rates, as the most physical meaningful parameters one can extract from large multi-band photometric samples of galaxies, to study their evolution across the cosmic time. We use the estimates from BC03 templates and ExD star formation recipe in LePhare (LP/BC03/ExD in Table <ref>) as reference model for mass and star formation estimates, if not otherwise specified. This is for uniformity with previous analyses in KiDS (e.g. <cit.>). To statistically assess the difference among the stellar mass and the SFR estimates among the different configurations, we will use the following estimators: 1) the relative bias, 2) the median absolute error and 3) the outlier fraction, defined in Sect. <ref>. §.§ Stellar masses In this section we show the results for the stellar masses for the case we fix the redshift of the galaxies of the sample to the spectroscopic and morphoto-metric redshifts, introduced in Sect. <ref> and shown in Fig. <ref>. By stellar masses, we aim at determining the total mass in stars, while we have seen in Sect. <ref> that the seeing matched GAaP photometry adopted in KiDS does not correspond to a “total aperture”. Hence, if using these fractional fluxes, the stellar masses calculated by the stellar population codes are the mass of stars required to produce the inputted galaxy SED, resulting in an aperture bias. Therefore, in order to recover a fair estimate of the total galaxy stellar mass, the observed SED must be representative of the total light emitted from the galaxy. In order to correct this systematic effect, we opt to use the quasi-total SExtractor, MAG_AUTO, using the equation: M_ *, corr= M_*,out+0.4×(GAAP_r- MAG_AUTO) where M_*,out is the stellar mass estimated by the stellar population tools, GAAP_r is the r-band GAaP magnitude from the KiDS catalog, and the M_ *, corr is the corrected “total” mass, under the assumption of constant mass-to-light ratios. In the following we will first show the results of the stellar population analysis using the spectroscopic redshift, then we compare these latters with the results of the morphoto-z to estimate the impact of the larger uncertainties on these latter on determining galaxy distances (see Sect. <ref>). Finally, we discuss the impact of the inclusion of the nebular emissions in the models. §.§.§ Using Spectroscopic redshifts We start showing the results obtained using the spectroscopic redshift as fixed parameter in the stellar population tools. In Fig. <ref>, we compare the stellar mass estimates from LePhare and CIGALE, using different libraries and SFHs and spectroscopic redshifts. 
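Coming back to the aperture correction defined above, the following minimal sketch shows how it would be applied to catalog columns. The column names are placeholders, and the correction acts on the logarithmic stellar mass under the stated assumption of a constant mass-to-light ratio.

```python
# Sketch of the "total" aperture correction (illustrative; column names are
# hypothetical): the GAaP r-band magnitude and the SExtractor MAG_AUTO are
# used to rescale the fitted stellar mass, assuming a constant M*/L.
import numpy as np

def total_stellar_mass(logM_out, gaap_r, mag_auto_r):
    """log10 M*_corr = log10 M*_out + 0.4 * (GAAP_r - MAG_AUTO_r)."""
    return np.asarray(logM_out) + 0.4 * (np.asarray(gaap_r) - np.asarray(mag_auto_r))

# a brighter MAG_AUTO than GAaP (extended source) revises the mass upward
print(total_stellar_mass(10.2, gaap_r=18.3, mag_auto_r=17.9))   # -> 10.36
```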
All other parameters, in Table <ref>, are kept varying in the model grid to be estimated via the SED fitting procedure. The range of masses is quite large and spans over almost 6 order of magnitudes from log M_*/M_⊙∼6 to log M_*/M_⊙∼12, although stellar masses log M_*/M_⊙7-7.5 are compatible with globular cluster sized systems rather than galaxies. We cannot exclude the contamination from such compact stellar systems, but we decide to retain all sources in the catalog without making any mass based selection. Nonetheless, we will keep this cautionary note on the very low mass end in mind throughout the paper. Overall, the stellar masses all align along the 1-to-1 relation with residuals (bottom panels), defined as Δlog M=log M_y-log M_x, computed in different mass bins, that are generally distributed around zero, but with the LP models systematically smaller and the CI models rather aligned to the reference model, LP/BC03/ExD. All residuals, except LP/M05/SB, are consistent with zero within 1σ scatter, defined as the standard deviation of the Δlog M, σ(Δlog M), at least for masses larger than log M/M_⊙∼9. In the same bottom panels, we report the mean scatter for the mass bins at log M/M_⊙<9 and >9, showing generally a slightly larger values at lower masses (mean 0.22 dex) than larger masses (mean 0.20 dex), with the CI models also showing a systematically smaller σ(Δlog M) than LP ones. The bias, NMAD and outlier fraction of each configuration are summarized in Table <ref>. Similarly to Fig. <ref>, the bias is indeed consistent with zero for all configurations within the NMAD, except for LP/M05/SB for which the bias is statistically significant. CIGALE shows both a negligible bias and small NMAD, whether or not the same stellar libraries of the reference model from LePhare (BC03) are adopted, meaning that the code and the SFH can have an impact on the scatter but not on the accuracy of the stellar mass inferences. On the other hand, the large bias found for LP/M05/SB shows that the combination of template and SFH has a large impact on the bias, for a fixed fitting method. If we also fix the template (see e.g. M05), we can see that the bias can have rather large variations (from -0.423 of LP/M05/SB, to -0.178 of LP/M05/ExD), eventually due to the impact of the different SFH choices that exacerbate the difference on the treatment of thermally pulsating asymptotic giant branch (TP-AGB) phase by M05 (see e.g. <cit.>). Moreover, we notice a double sequence, at stellar masses log M_*/M_⊙10.8 in the models including the exponential SFH, separating star-forming from quiescent galaxies. The same sequence is not evident on the SB model, which tends to assign younger ages and lower mass-to-light ratios to star-forming galaxies and ultimately ending into an overall strong underestimate of the stellar masses (see the negative biases in Table <ref>). The CIGALE model using M05 and a delayed exponential (CI/M05/DelEx) shows a tighter distribution, with no sign of the double sequence. This confirms that the M05 models are more sensitive than others to the SFH, although there might be a residual component from the fitting (code) procedure, having CI models ∼30% smaller scatter than the LP ones, on average. The NMAD generally mirrors these behaviors, with M05 configurations being larger than the corresponding set-ups from other templates (see e.g. LP/M05/SB vs LP/CB07/SB or CI/M05/ExD vs CI/BC03/ExD). All in all, from Fig. 
<ref> we see that, using the spec-z as input, the scatter of the different combinations is well confined within ∼0.2 dex and the outlier fraction is always very small (∼4-5%), consistent with a log-normal distribution of the uncertainties and no pathological cases across the models. Considering all the statistical estimators, we can conclude that stellar masses from spec-z are rather robust quantities, with no sign of significant systematics except for the LP/M05/SB model. This is consistent with findings from previous analyses also using optical + NIR photometry (e.g. Lee et al. 2010, <cit.>), although there are analyses reaching different conclusions (<cit.>). §.§.§ Using morphoto-metric redshifts We now show the results obtained using the GaZNet redshifts as fixed input in the stellar population tools. This is a critical test to check the impact of noisier redshifts on the statistical estimators discussed in Sect. <ref>, and the overall variation in accuracy and precision of the estimates we might expect when applying this analysis to purely photometric datasets such as the full KiDS photometric galaxy sample (see K+19 and future releases). In Fig. <ref> we show the same correlations as in Fig. <ref>, but using the GaZNet redshifts, while in Table <ref> we report the corresponding statistical estimators. In this case, we also use the LP/BC03/ExD model based on the spec-z as reference, to check the impact of the GaZNet redshifts in terms of accuracy and scatter. Basically, the results show that, for the same correlations seen in Fig. <ref>, the relative bias of the different configurations is not worsened, meaning that the accuracy of the mass estimates is not affected by the use of the morphoto-z. This is likely a consequence of the good accuracy of the latter, as seen in Fig. <ref>. On the other hand, we register an evident increase of the NMAD as a consequence of the intrinsic statistical errors and outlier fractions of the morphoto-z, which is also mirrored by the scatter of the residuals at the bottom of the 1-to-1 relations, now of the order of 0.23 dex for log M_*/M_⊙>9 and 0.49 dex for log M_*/M_⊙<9, on average. This large scatter at low stellar masses is mainly caused by the trend seen below log M_*/M_⊙=8.5, where stellar masses are systematically overestimated compared to those obtained with the spec-z. This is not an effect that comes from the particular set-up of the fitting procedure, as shown by the comparison of LP/BC03/ExD/morphoto-z against the same set-up with spec-z (bottom left plot in Fig. <ref>). Even in this latter case, we see that below log M_*/M_⊙=8.5 the positive bias is similar to that of all other configurations. We track the origin of this systematic effect to a bias of the GaZNet redshifts for a group of objects at very low redshift (z<0.05, see Fig. <ref>), which also turn out to have low masses. This can be due to some residual contamination from stars, not picked up by the spectral classification, or just to a failure of the GaZNet predictions at very low z, which clearly impacts the mass predictions. We will come back to this in Sect. <ref>. However, still looking at LP/BC03/ExD/morphoto-z vs. spec-z, above log M_*/M_⊙=8.5 the bias is almost absent and the only relevant effect is the GaZNet redshift scatter which, from the NMAD, is quantified as 0.09.
This is confirmed by noticing that the general increase of the NMAD from the spectroscopic to the morphoto-metric sample, in Table <ref>, is compatible with the sum in quadrature of the NMAD of the former with the 0.09 contribution of the latter, as expected for pseudo-Gaussian distributions. This is also consistent with a log-normal distribution of the stellar mass uncertainties, as confirmed by the outlier fractions, which are all of the order of 5-6% above 2σ of the log M_* scatter. A more detailed discussion of the variation of the statistical estimators as a function of the sample properties is presented in Sect. <ref>. §.§.§ The impact of the nebular emissions on stellar masses As anticipated at the beginning of Sect. <ref>, we intend to check the impact of the inclusion of nebular emission on our models. Generally speaking, star-forming galaxies can have their spectra heavily contaminated by nebular emission. The most prominent lines are Lyα at λ1216Å, [OII] at λ3727Å, Hβ at λ4861Å, [OIII] at λλ4959,5007Å, and Hα at λ6563Å. These emission lines are sparsely distributed across the optical and NIR wavelength range at redshift z<1, but they are generally fainter than the continuum collected by the broad bands in this redshift range, except for strong starburst, low-mass galaxies. Here, we have the chance to estimate the impact of the presence of these emission lines on the stellar masses, while we will discuss the impact on the star formation rate estimates in Sect. <ref>. We consider the options offered by LePhare and CIGALE (see details in Sects. <ref> and <ref>) to implement the NE in the models, as in Table <ref>. The results for the statistical estimators are reported in brackets in Table <ref> for all models considered. Here, we do not find any significant variation of the indicators for any of the models, which lets us conclude that the stellar masses are poorly sensitive to the inclusion of the NE, regardless of the stellar template, the SFH, and the code adopted. We will keep these models in the catalog and consider them in the discussion of the model variance (Sect. <ref>). §.§ Star Formation Rates In this section, we present the results for the star formation rates. These measurements represent the current amount of stellar mass formed per unit time, corresponding to the best-fit parameters of the assumed SFH model fitting the SED. As, by definition, the single burst models do not provide any such estimate, they will be discarded in the following analysis. For the same reason, the mixed model allowed by the M05 libraries (SB+ExD) is almost equivalent to the ExD, as it returns the same SFR estimates for the galaxies best fitted by an exponential SFH (ExD). Hence, only the latter will be listed in the result tables and figures for the LePhare models, together with the DelEx of CIGALE. We recall the set of τ and ages adopted for the models in Table <ref>. As can be seen, we have used a rather large range for both parameters to check their impact on our inferences, even though some extreme values can be either slightly unphysical or too optimistic. For instance, the fitting procedure might have little sensitivity to effectively distinguish between τ=15 Gyr and 30 Gyr, both producing a rather flat SFH, hence leaving the model large leverage to converge on either value with similar confidence.
On the other hand, the stellar models can be rather insensitive to an age of 0.5 Gyr, since the broad-band photometry is unable to catch the typical features of young stars, and also given the very shallow limiting magnitude of the u-band, which would provide most of the rest-frame UV emission of galaxies up to z=0.9. However, for this test we decide to maintain a broad range of priors for the parameter space, to learn their impact and confidently optimize their choice for future analyses. As far as the output of both stellar population codes is concerned, similarly to the stellar masses in Sect. <ref>, the star formation rates should also be corrected to total fluxes. This is needed to ensure that the specific star formation rate, sSFR=SFR/M_*, of a galaxy is conserved. Hence, in the following, we will correct the SFRs by the same amount as the stellar masses, i.e. log SFR_corr=log SFR_out + (log M_*,corr-log M_*,out), where M_*,out and SFR_out are the outputs of the SED fitting codes and M_*,corr is given by Eq. <ref>. Finally, as we want to select a star-forming sample, we will adopt a canonical cut in specific star formation rate (sSFR) to separate passive from active galaxies, and use log sSFR/ yr^-1 =-11 as a threshold (see e.g., <cit.>). sSFRs lower than this value should not, in principle, be taken at face value, as they correspond to a physically negligible SFR. For this reason, we do not use them in our analysis, although we report them in our catalog with the warning to use them with caution. §.§.§ Using Spectroscopic redshifts As for the stellar masses in Sect. <ref>, we first discuss the SFR results obtained using the spectroscopic redshift as a fixed parameter in LePhare and CIGALE. In Fig. <ref>, we show the SFRs computed using the different libraries and SFHs as in Table <ref>. Overall, the SFRs are all aligned along the 1-to-1 relation, although both the LePhare and CIGALE estimates using M05 show some negative offset (more pronounced for CIGALE), as seen from the residuals shown at the bottom of each panel. Furthermore, at log SFR/M_⊙ Gyr^-1≲8, the correlations show a tilt toward a positive bias, more pronounced for CIGALE, which only for CI/BC03/DelEx partially compensates the negative bias at higher SFRs. On the other hand, at log SFR/M_⊙ Gyr^-1≳8, the CI/BC03/DelEx estimates are nicely consistent with the LePhare estimates of LP/BC03/ExD. Overall, the two tools show a substantial agreement if they use the same libraries, while they do not seem to show a strong dependence on the SFH. This is seen from Table <ref>, showing the statistical estimators for the different experiments. Here we find, indeed, that LP/M05/ExD and CI/M05/DelEx have similar bias, NMAD, and outlier fraction. Looking at all the statistical estimators, we can confirm that, broadly speaking, the relative bias of the SFR estimates is barely consistent with zero within the NMAD for M05, while it is well consistent with zero for BC03. From Fig. <ref> (bottom panels), we also see that the overall scatter of the residuals is of the order of 0.3 dex, slightly larger than that of the stellar masses. Moreover, the outlier fraction is, also in this case, consistent with a log-normal distribution across the whole SFR range. This broad result suggests that the SFR in star-forming galaxies is a rather stable parameter in the redshift range we have considered. The degree of accuracy and scatter among the different model and library configurations is almost comparable to that of the stellar masses derived from spec-z. We will now check whether this also holds for the morphoto-metric redshifts, while the impact of the NE will be checked in Sect. <ref>.
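As a practical summary of the two total-flux corrections used above (Eq. <ref> for the stellar masses and the sSFR-conserving rescaling of the SFRs), the following minimal sketch shows how they can be applied to the catalog; it is only an illustration, assuming the relevant quantities are available as simple arrays and using column names of our own choosing:

import numpy as np

def total_flux_corrections(logM_out, logSFR_out, gaap_r, mag_auto_r):
    # Correct SED-fitting outputs from GAaP (fractional) to quasi-total fluxes.
    # logM_out, logSFR_out: outputs of the stellar population code (log10 values);
    # gaap_r, mag_auto_r: r-band GAaP and SExtractor MAG_AUTO magnitudes.
    logM_corr = np.asarray(logM_out) + 0.4 * (np.asarray(gaap_r) - np.asarray(mag_auto_r))
    logSFR_corr = np.asarray(logSFR_out) + (logM_corr - np.asarray(logM_out))   # preserves sSFR = SFR/M*
    return logM_corr, logSFR_corr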
§.§.§ Using morphoto-metric redshifts In Sect. <ref> we have discussed the impact of the morphoto-z from GaZNet on the stellar mass estimates and shown that the net effect of the morphoto-metric redshifts is to increase the scatter and the outlier fraction of the final estimates. We have also seen that the overall impact of the GaZNet redshifts can be quantified by comparing two sets of estimates obtained with the same tool, stellar population library, and SFH, changing only the input redshifts (e.g. using LP/BC03/ExD from morphoto-z vs. spec-z). The same trend is seen for the SFR estimates based on the GaZNet redshifts with respect to the spec-z, as shown in Fig. <ref>. Compared to the spec-z estimates in Fig. <ref>, we see that the scatter and the number of “large” outliers (see below) of the morphoto-z based estimates are increased with respect to LP/BC03/ExD from spec-z. This is seen from the bottom panels with residuals in Δlog SFR (see caption), where we report an average scatter of 0.35 dex for log SFR/M_⊙ Gyr^-1>9 and 0.52 dex for log SFR/M_⊙ Gyr^-1<9. This is also quantified in Table <ref>, where we again measure an increased NMAD for all configurations. As noticed for the masses, this is compatible with a pseudo-Gaussian increase of the NMAD values of the morphoto-z estimates, with the NMAD of LP/BC03/ExD/morphoto-z (0.163) providing a measure of the overall impact of the morphoto-z errors. The “Gaussianity” of the log SFR distribution obtained from the morphoto-z is confirmed by the outlier fraction above 2σ(log SFR), of the order of 5%. In the same Fig. <ref>, we also see that the bias is generally compatible with zero, except for CI/M05/DelEx (morphoto-z). In general, a trend of the bias with the SFR is evident, due to a positive bias at the lower star formation rates (log SFR/M_⊙ Gyr^-1≲8.5). Here the effect of the morphoto-z is to exacerbate the weak trend shown by the spec-z estimates, which is partially absorbed by the scatter of the residuals. Due to the well-known correlation between the SFR and the stellar mass (see Sect. <ref>), we conclude that this has the same origin as the bias found for stellar masses at log M_*/M_⊙<8.5, as discussed in Sect. <ref>. We also notice a cloud of outliers at log SFR/M_⊙ Gyr^-1≳10 in the GaZNet-based estimates. These come from a series of morphoto-z outliers that overestimate the intrinsic redshift of the galaxy. Indeed, the spuriously high redshifts force the SED fitting procedure to interpret the rest-frame photometry of the galaxy as bluer and, hence, more star-forming than the result one obtains from the spec-z. To conclude the analysis of the SFRs, we can say that, as for the stellar masses, these are also rather stable quantities with respect to the fitting tool, stellar libraries, and SFHs, as they do not show significant systematics, except at small SFRs, although we register a tendency of the M05 models to underestimate the SFRs with respect to BC03. §.§.§ The impact of the nebular emissions on star formation rates We can finally check the impact of the nebular emissions on the predictions of the star formation rates from the different stellar population models considered. As done for the stellar masses in Sect. <ref>, we report the results for the main statistical indicators in Table <ref>, side by side with the same indicators from the no-emission models, using the GaZNet redshifts as input.
As for the masses, we do not see any significant change in the overall relative bias, NMAD, and outlier fraction, meaning that the inclusion of the nebular emissions does not produce any relevant effect for any of the models, given the mass (log M_*/M_⊙>8.5) and redshift (z<1) ranges considered here. §.§ Median mass and SFR estimates A relevant result of this paper is that both the stellar mass and the star formation rate can be robustly constrained with seeing-matched photometry covering a wide range of wavelengths, from optical to NIR (see e.g. <cit.>, <cit.>). For completeness, in <ref> we briefly test the case where only optical bands are available and compare it with the results obtained in Sect. <ref>, to illustrate the advantage of adding the NIR to the optical bands in terms of accuracy and precision of the stellar population estimates. By robust constraints we mean here that the M_* and SFR estimates do not show a statistically significant “relative” bias when compared to the estimates from other tools, libraries, and star formation histories. As seen in Sects. <ref> and <ref>, this is generally true for all models considered except LP/M05/SB, as this shows a relative bias of the stellar masses which is systematically larger than the scatter of the overall mass estimates (see Table <ref> and Fig. <ref>). This makes this model an outlier with respect to all other models (see Sect. <ref>), and we decide to exclude it from the following analysis. As reference estimates we have arbitrarily chosen the LP/BC03/ExD model, but this cannot be taken as ground truth. If we assume that the true values of M_* and SFR have to be found within the interval covered by the adopted models, then we can define the “median” value as a reasonable estimator of the ground truth of each of them. To deal with the low number of measurements available to compute the median, we follow the approach of <cit.> and adopt the Hodges-Lehmann estimator, defined as the median of the means, in linear space, of all the possible pairs of estimates in the sample: M_* ^ MED= median ( (M_*,i+M_*,j)/2 ), where the i and j indexes vary over the different models in Table <ref>. For a dataset with n measurements, the set of all possible two-element subsets, over which the median is computed, has n(n - 1)/2 elements. Similarly, we define a median star formation rate SFR ^ MED= median ( (SFR_i+SFR_j)/2 ). Assuming these quantities to be unbiased estimators of the ground truth, we will use them for a science validation test as in Sect. <ref>. As a sanity check for our median estimates, as well as for the individual model results, in Appendix <ref> we show a direct comparison of the M_* and SFRs against some external catalogs overlapping with the KiDS area. In particular, we use the stellar masses from <cit.>, which makes use of ugriZ photometry, and the SFR estimates of the SDSS-DR7 galaxy sample from the MPA-JHU group[https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/sfrs.html ], based on spectroscopic data as discussed in <cit.>. The main conclusion from these comparisons is that the “median” stellar masses and star formation rates derived from the 9-band SED fitting are generally consistent with independent estimates based on different data and techniques. This is particularly true for the M_* estimates, while for the SFRs we can expect some offset due to intrinsic systematics of the different proxies adopted (see also Sect. <ref>). However, in all cases the relative bias between different datasets is confined within the typical scatter of the data.
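To make the definition above concrete, a minimal sketch of the Hodges-Lehmann median of Eqs. <ref> follows; it assumes that the per-galaxy estimates from the different set-ups are collected in a simple sequence, with the pair means taken in linear space as in the text:

import numpy as np
from itertools import combinations

def hodges_lehmann(estimates):
    # Hodges-Lehmann median of a set of linear-space estimates
    # (e.g. the M* or SFR values of one galaxy from the different set-ups).
    pair_means = [0.5 * (a + b) for a, b in combinations(estimates, 2)]
    return np.median(pair_means)

# Example with hypothetical per-model masses (in solar units) for one galaxy:
# m_med = hodges_lehmann([10**10.1, 10**10.3, 10**10.2, 10**10.25])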
§ DISCUSSION In the previous sections we have assessed the accuracy and scatter of the different configurations using the relative bias, NMAD, and outlier fraction as statistical estimators, and concluded that both stellar masses and SFRs are rather robust quantities. In this section we want to examine the accuracy and scatter in more detail, as a function of intrinsic properties of the galaxies, like redshift, signal-to-noise ratio, and stellar mass. This will allow us to check for “trends” in the systematics that might affect the stellar population parameters in different volumes of the parameter space defined by these quantities. This is fundamental if one wants to study the evolution of the mass function or of scaling relations like the “main sequence” of star-forming galaxies from the M_*- SFR relation. We also briefly discuss other sources of systematics, and finally compare the galaxy mass function and the M_*- SFR relation derived from our “median” parameters with previous literature, as a science validation of our inferences. §.§ Relative bias, NMAD and outliers as a function of redshift, SNR and stellar mass In Fig. <ref> we plot the bias, NMAD, and outlier fraction as a function of redshift, r-band signal-to-noise ratio (SNR), and stellar mass, for the stellar masses (left) and star formation rates (right) derived by fixing the redshifts to the morphoto-z. Here, we decide to show only the GaZNet-based estimates because, as seen in Tables <ref> and <ref>, these are the estimates that, by incorporating the uncertainties of the morphoto-metric redshifts, provide the upper limits for both the scatter and the outlier fractions. We also show the dependence on the r-band SNR, as a lower limit to the photometric uncertainties (all other bands being generally less deep than the r-band), which should also enter the precision of the stellar population parameters. We finally remark that we limit our comparison to log M_*/M_⊙>8.5, as we have seen in Sect. <ref> that, below this mass limit, the estimates are dominated by the morphoto-z biases. The first comment is that both the stellar masses and the star formation rates show similar features in the statistical estimators as a function of the different quantities, suggesting that the sources of the biases and scatter are the same for both quantities. For the stellar masses, going from top to bottom, the outlier fractions usually stay within more than acceptable values over all ranges, although we see that toward low redshift (z≲0.05) and low masses (log M_*/M_⊙≲9) the outlier fraction and NMAD show a systematic increase. This has been anticipated in Sects. <ref> and <ref> and tracked to an excess of outliers in the GaZNet redshifts. A similar degradation of the estimators is observed at z≳0.7 for the M_* estimates, mainly for the poorer statistical samples, which also have degraded redshift estimates. Overall, we see that all the statistical estimators remain contained within a reasonable bias (|Δ p|<0.2), NMAD (<0.3), and outlier fraction (<10%), especially having excluded LP/M05/SB from the mass set-ups. For the star formation rates, we notice the bimodal behavior of |Δ p| between the M05 and BC03 models discussed in Sect. <ref>, with the CI/M05/DelEx model showing the largest deviation from all other models. This possibly suggests that the M05 stellar libraries need to cope with more complex SFHs than the ones used here.
However, since the overall indicators of all models stay contained within the limits of the NMAD, we have kept all of them in the SFR^MED estimates, so as to average out possible systematics. For all the other estimators (NMAD and outlier fraction), we see little difference among the adopted fitting configurations, and confirm no major impact of the NE models either. To conclude, we expect that we can use the “median” estimates over the full range of masses log M_*/M_⊙>8.5 and at all redshifts z≲1 in future applications, although, for the SFRs, it remains to be seen whether the SFR^ MED is totally bias free. In the next sections, and in Appendix <ref>, we will show some evidence that this might be the case. We have also checked the statistical estimators as a function of r-mag (not shown) and we can confirm that the outlier fraction and the bias become almost out of control at r-mag≳23, which sets a safe limit for future applications based on the use of the current GaZNet redshifts. §.§ Some considerations about other sources of systematics Before moving to some science applications, we need to stress that providing full insight into all possible systematics that might come from the stellar population analysis is beyond the purpose of this paper. We have already introduced the problem of the wavelength coverage in Sect. <ref> and addressed it in <ref>. Another source of bias one should consider is the use of input redshifts in the stellar population tools. As discussed in Sect. <ref>, we are motivated to fix the redshifts because, if the stellar population tools are left to constrain the redshift and the stellar population properties at the same time, we expect the degeneracies between redshifts and galaxy colours to strongly affect the stellar populations. This is also briefly discussed in <ref>, where we show that the results in terms of photo-z and stellar masses are much more scattered and prone to biases than when fixing the redshifts. On the other hand, we have seen in the previous sections that, in the case of unbiased morphoto-metric redshifts, moving from spectroscopic to photometry-based redshifts does not affect the accuracy, while the scatter and the outlier fraction increase by an acceptable amount. Finally, a comment on the stellar templates. In this paper we have used a variety of libraries that could be directly incorporated in the two reference tools adopted (see Table <ref>). However, this list is neither complete nor optimal to fully account for the current state-of-the-art stellar population models. We expect to expand our analysis to other stellar libraries (see, e.g., MILES, <cit.>) in future analyses. In this respect, we can consider this analysis as a first step of a more general program to apply a larger variety of models to ground-based multi-band datasets. §.§ Galaxy stellar mass function, star formation rate function and SFR-M_* relation We want to conclude this paper with a science validation test for the quantities we have focused on in this analysis: the “median” values, M_*^MED and SFR^MED. In Sect. <ref> we have seen that these quantities can be considered robust estimates of the stellar mass, M_*, and the star formation rate, SFR, respectively. A way to test this is to derive the galaxy stellar mass function (GSMF), i.e. the number of galaxies per unit volume in a given mass bin, Φ (M), and the corresponding star formation rate function (SFRF), i.e. the number of galaxies per unit volume in a given SFR bin, Φ (SFR).
The latter, in particular, will give us the chance to compare the SFRs derived from different indicators (UV, Hα, IR luminosities) with our estimates obtained from the KiDS 9-band photometry. We finally derive the SFR-M_* relation and compare it with independent observations to check the broad consistency of our inferences with previous literature. This will allow us to qualify the dataset produced with the process presented in Sect. <ref> for future catalog compilations and science applications. Both the GSMF and the M_*-SFR relation have a crucial role in the understanding of the assembly and formation of galaxies (see discussion in Sect. <ref>), and enormous progress has been made in tracing these quantities back to the early phases of galaxy formation (see e.g., GSMF: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, SFR-M_*: <cit.>). The SFRF is less constrained (especially for highly star-forming galaxies), as it is strongly dependent on the methodology assumed to obtain the galaxy SFRs (see e.g. <cit.>). For this test, we are interested in checking the consistency of our derivations with previous literature in a statistical sense, while we leave the physical interpretation of these relations to a dedicated analysis using the full KiDS photometric galaxy catalog. To avoid corrections due to the different completeness masses of the GAMA and SDSS-DR17 data in our spectroscopic sample (see Sect. <ref>), we will consider below only the GAMA sub-sample. §.§.§ Galaxy Stellar Mass Function In Fig. <ref> we start by showing the stellar mass vs. redshift diagram of the GAMA galaxies in our sample. We also overplot the contour of the completeness mass, obtained from the turn-over points of the number counts in (narrow) redshift bins (see e.g. <cit.> for more details on this method). As we can see, the completeness mass becomes almost constant at ∼10^11 M_⊙ at z≳0.4, leaving there just a small statistical sample to compare with the literature. We then decide to limit our analysis to z<0.4, where we have different reference works to compare our data with. For the comparison of the GSMF, we use observations derived for the GAMA galaxies at z<0.1 (<cit.>, <cit.>) and 0.2<z<0.4 (<cit.>). In Fig. <ref> we show the GSMF from the M_*^ MED estimates, derived in the redshift bins z=0.02-0.1 and z=0.2-0.4, against GSMFs from similar redshifts for homogeneity. In the same figure, we also show the completeness mass, defined as in Fig. <ref>. In Fig. <ref>, we do not compute the volume occupied by the complete sample of galaxies in the GAMA area, V_ max, as this would require knowledge of the GAMA survey selection function, which is beyond the scope of this comparison. We rather normalize the counts to match the literature GSMFs. As can be seen, both the estimates derived with the spec-z and with the morphoto-z nicely follow the GSMFs of previous literature in the two redshift bins. In particular, at z<0.1 (left panel) the consistency with previous GAMA inferences from <cit.> and with the recent compilation from <cit.> is almost indistinguishable for masses above the limiting mass of our spectroscopic sample, although the match becomes more insecure at very high masses, where both the exact volume adopted and the different selections can cause noisy statistics. A similar behaviour is also seen in the other redshift bin adopted (0.2<z<0.4, right panel). Here, the consistency of our GSMF with the dataset from <cit.> is again very good over the full range of masses above the completeness limit. Overall, this good match with independent GSMFs brings us to the conclusion that the stellar masses we have produced have a science fidelity high enough to be extended to further analyses.
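As an illustration of this procedure, a GSMF of this kind can be sketched in a few lines; this is only a schematic, assuming the corrected masses and redshifts are available as arrays, and normalizing the counts to an external reference instead of computing V_max, as done above:

import numpy as np

def gsmf_counts(logM, z, zmin, zmax, bins=np.arange(8.0, 12.2, 0.2)):
    # Number counts per dex in a redshift bin; the normalization is arbitrary
    # and is meant to be matched to a literature GSMF (no V_max correction).
    logM, z = np.asarray(logM), np.asarray(z)
    sel = (z >= zmin) & (z < zmax)
    counts, edges = np.histogram(logM[sel], bins=bins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    phi = counts / np.diff(edges)            # counts per dex, arbitrary units
    return centers, phi

# A rough completeness mass in a narrow redshift slice can then be read off
# as the bin where the counts turn over (peak), as described in the text.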
§.§.§ Galaxy Star Formation Rate Function Unlike the GSMF, the star formation rate function (SFRF) is not a standard proxy for galaxy evolution, although it can provide relevant insight into galaxy formation (e.g. <cit.>). One reason is that SFRs are more sensitive than stellar masses to the assumed methodology; hence, more focus is usually given to the observed UV, Hα or infrared (IR) luminosity functions as probes of the SFRs in galaxies (<cit.>). Despite these difficulties, there have been attempts to quantify the SFRF at different redshifts (e.g. <cit.>). At low redshifts (z<0.5) the UV and Hα data are significantly affected by dust attenuation effects (e.g. <cit.>, <cit.>). This limitation impacts the derived UV/Hα star formation rate functions, which are usually incomplete at the high star-forming end (log SFR/M_⊙ Gyr^-1≳10). Thus, especially in these high SFR ranges, IR SFRs are considered more robust and give a more accurate estimate of the SFRFs, at least at low redshifts (e.g. <cit.> and references therein). Taking all this into account, in Fig. <ref> we show the SFRFs based on the “median” values derived in Sect. <ref>. We compare these SFRFs, in three redshift bins consistent with other observations, with <cit.>, which reports a collection of SFRFs based on UV, Hα and IR, and with <cit.>, which presents SFRs from SED fitting of a local sample of SDSS galaxies. In the figure, we can see the co-existence of SFRs based on different proxies and appreciate the large scatter introduced by the different methods. Broadly speaking, the UV- and Hα-based SFRFs are consistent with each other and generally discrepant with the IR-based ones. Our SED estimates look very consistent with the IR SFRFs, down to the “limiting SFR”, marked as vertical dashed lines in the different redshift bins[This has been obtained following the same procedure as for the stellar masses, i.e. from the peak of the SFRF. Here, though, we do not interpolate the SFRF vs. z, but show the peak in each particular bin.]. Finally, we remark on the almost perfect agreement with the SDSS SED estimates from <cit.>, especially considering our spec-z estimates. Hence, we conclude that the SFR^ MED allow us to build SFRFs which are in good agreement with previous literature based on the IR luminosity function and on SED fitting, while the differences with the UV- and Hα-based estimates have to be ascribed to the different calibrations of the various methods (see e.g. <cit.>). This does not impact the fidelity of our estimates, as they show no systematics with respect to similar (photometric) probes. As we will see in Appendix <ref>, this conclusion is corroborated by the direct comparison of the SFR^ MED estimates with spectroscopic SFRs, showing a statistically insignificant bias for the morphoto-z based estimates and no bias for the spec-z based ones. To conclude, the consistency of both the GSMFs and the SFRFs with the literature further supports the assumption that the “median” estimates represent a realistic proxy of the true M_* and SFRs, whether using spectroscopic or morphoto-metric redshifts. In particular, the accuracy of the GSMFs and SFRFs based on morphoto-metric redshifts demonstrates that the method can be successfully extended to larger photometric KiDS galaxy collections. §.§.§ M_*-SFR relation For the M_*-SFR relation, in Fig.
<ref> we also plot the results for the lower-redshift bins, where the mass completeness allows us to have a sufficient sample for a consistency check. We use, as comparison, a series of mean relations of star-forming galaxies from other literature studies in different redshift bins: namely, 1) Tortora et al. (2023, in preparation), including <cit.>, based on a hybrid method using far-ultraviolet (FUV)+total infrared luminosity, 2) <cit.>, performing SED fitting using multi-band FUV-FIR photometry, 3) <cit.>, based on a collection of homogenized literature[They calibrate to a Kroupa IMF, and the SFR estimates to the Kennicutt & Evans calibration <cit.>. Note that the choice of IMF does not impact the M_*-SFR relation, as it equally affects the stellar mass and the SFR estimates.]. We also add the predictions from the Illustris-TNG (<cit.>) and EAGLE (<cit.>) simulations, to illustrate the potential of deriving SFRs from larger KiDS galaxy samples to be checked against the outcome of state-of-the-art hydrodynamical simulations, and hence gain insight into the galaxy formation scenario[We did not compare the inferred GSMF in Sect. <ref> with the same simulations because the latter are tuned, by construction, to fit the observed stellar mass functions.]. In Fig. <ref> we show the M_*-SFR relation for the median quantities obtained using the GaZNet redshifts as input only. This is because we have seen in Sect. <ref> that these represent the worst-case scenario, where the measurements are more scattered and show systematic effects only at very low masses (log M_*/M_⊙<8.5), i.e. below the completeness mass we use as a lower limit for the science analysis. From Fig. <ref> we find that the M_*-SFR relation of the KiDS galaxies (black points with errorbars) nicely follows the majority of the literature data, both from observations and simulations, down to the completeness mass, despite the different methods and definitions of star-forming systems adopted in the literature. At masses below the limiting mass, our M_*-SFR relation shows a significant departure from the other relations. We will check whether this is indicative of the presence of systematics when we use the full KiDS photometric sample, for which we expect to push the mass completeness to lower levels in all redshift bins. We are convinced that this consistency check of both the M_*-SFR relation and the GSMF, albeit only qualitative at this stage, confirms the validity of the procedure and of the data produced in this analysis. § CONCLUSIONS AND PERSPECTIVES In this paper we have used a spectroscopic galaxy catalog including 9-band (u g r i Z Y J H K_s) photometry from the 4th data release of the Kilo-Degree Survey (KiDS) to derive robust stellar masses and star formation rates. We have performed a full template fitting analysis using two popular stellar population codes, LePhare and CIGALE, and a combination of stellar population libraries (<cit.>, <cit.>, <cit.>) and star formation histories (i.e. a single burst, an exponential decline, and a delayed exponential). Besides the spectroscopic redshifts, taken from the GAMA data releases 2 and 3 and the SDSS data release 17, we have considered as input of the SED fitting process the morphoto-metric redshifts obtained from the deep learning tool GaZNet (<cit.>). In the latter case, we can perform a controlled test of the variance one would introduce in large datasets, where only photometric redshifts are available for the galaxy catalogs.
In fact, the main goal of this analysis has been to assess the relative accuracy and the variance of the stellar population parameters under a variety of combinations of fitting tools, stellar templates, and star formation histories. We summarize here the main results of this analysis: 1) the stellar mass and the star formation rate show limited scatter, and a relative bias that stays within the scatter, when comparing the estimates for each galaxy across the different methods; as such, these quantities are rather stable against the stellar template fitting set-ups; 2) the relative bias, NMAD and outlier fraction vary with the stellar mass and SNR, not with redshift; 3) due to the overall resilience of the parameters to the different variables in play, we can reasonably adopt a median definition as an unbiased estimator of the “ground truth” values of the parameters. Following <cit.>, we have used the Hodges-Lehmann median for this robust parameter estimate and used it for a science validation; 4) we have evaluated the scatter of the individual fitting set-ups with respect to the Hodges-Lehmann median (Fig. <ref>) and found that, depending on the combination of templates and star formation histories, stellar masses and star formation rates can deviate by ∼0.1 dex, for high-mass systems, up to ∼0.2 dex, for low-mass systems; 5) as a science validation test, we have derived the stellar mass function and the star formation rate function, as well as the M_*-SFR relation, and compared them with previous literature in different redshift bins, finding a very good match with a wide range of studies; 6) we provide the catalog of the galaxy parameters, including stellar masses, star formation rates, age, metallicity, extinction, and the τ of the exponentially declining models, for ∼290 000 galaxies with spectroscopic redshifts, 0.01<z<0.9, from GAMA and SDSS-DR17. The catalog is available at this URL[link] and also contains the 9-band GAaP photometry, the r-band MAG_AUTO, and the spectroscopic redshifts from the parent spectroscopic surveys. In the future we plan to expand this test, including more stellar population tools (e.g. FAST: <cit.>, SED3FIT: <cit.>, Prospector: <cit.>, P12: <cit.>), star formation histories (e.g. log-normal <cit.>, Γ <cit.>), and stellar libraries (e.g. <cit.>). This will allow us to investigate an even larger variety of models and use the “median” of their outcomes (see Sect. <ref>) as an unbiased stellar population parameter estimator for the full KiDS “galaxy” photometric sample, and finally provide a general-purpose catalog to be used for a variety of galaxy studies. Pieces of similar datasets have been previously used in KiDS to study the size-mass relation of galaxies (<cit.>), the number density evolution of ultra-compact massive galaxies (<cit.>), the mass function of galaxies at different redshifts (<cit.>), the clustering of red-sequence galaxies (<cit.>), the dark matter halo masses of elliptical galaxies as a function of observational quantities (<cit.>), and the dark matter assembly in massive galaxies (<cit.>). § ACKNOWLEDGEMENTS NRN acknowledges financial support from the Research Fund for International Scholars of the National Science Foundation of China, grant n. 12150710511. RL acknowledges the support of the National Natural Science Foundation of China (No. 12022306) and the science research grants from the China Manned Space Project (CMS-CSST-2021-A01). AK acknowledges financial support from the One Hundred Top Talent Program of Sun Yat-sen University.
HF acknowledges the financial support of the National Natural Science Foundation of China (grant No. 12203096). LX thanks Dr. O. Ilbert for the useful suggestions about LePhare and Fucheng Zhong for useful discussions. § DATA AVAILABILITY The data that support the findings of this study are available at the URLs provided in the text. § §.§ The impact of missing NIR photometry In this Appendix we want to check the impact of the wavelength range on the analysis we have performed and, in particular, quantify the advantage of including the NIR bands to produce reliable stellar population parameters. It is well known that a wide wavelength baseline is a necessary prerequisite for accurate photometric redshifts (see e.g. <cit.>). As we will see in <ref>, accurate redshifts in turn have a large impact on the stellar population parameters. Here we want to show that, even assuming the redshift of a galaxy is known exactly, the wavelength baseline is crucial to provide stellar masses and SFRs with minimal bias and scatter. For the sake of space, we just consider the extreme case of fully discarding the NIR bands, to show the maximum error one would make by applying the same set-up as in Table <ref>. For the same reason, we show the results for four LePhare models only: LP/BC03/ExD, LP/M05/SB, LP/M05/ExD, LP/CB07/SB. In Table <ref> we report the main statistical estimators for the different configurations, for both the mass and the SFR estimates, assuming either the spec-z or the morphoto-z as input. These can be compared to Tables <ref> and <ref>. The most evident effect is the large increase of the scatter of the estimates, as measured by the NMAD. For the stellar masses we find that the NMAD increases by 30-40% (e.g. LP/M05/ExD/spec-z) up to about 100% (LP/BC03/ExD/morphoto-z). On the other hand, all SB models show little increase in NMAD (∼10%) and smaller relative biases, indicating that these are almost insensitive to the wider wavelength baseline. For the SFRs we find a similar degradation of the precision of the estimates, with the NMAD in Table <ref> increased by 30% to 90% with respect to Table <ref>, and minimal variation of the relative bias. §.§ Comparison of M_* and SFR estimates against external catalogs As anticipated in Sect. <ref>, here we want to perform a direct comparison of our M_* and SFR estimates against external catalogs. For the stellar masses, we have mentioned the existence of stellar mass catalogs based on similar KiDS data (e.g. <cit.>, <cit.>); however, here we decide to compare the stellar masses with a catalog based on different photometric data, from <cit.>. The catalog of their stellar masses is available on the GAMA website[Catalog link: http://www.gama-survey.org/dr2/schema/table.php?id=179]. It is based on the ugri optical imaging from SDSS (DR7) and (according to the catalog description) Z-band imaging from UKIDSS (see T+11 and references therein). Similarly to us, they use BC03 templates, a Chabrier IMF, and a Calzetti extinction law, with an exponentially declining star formation history, but they use a customized code for their stellar population models. Hence we can expect some differences in the estimates due to the code adopted and to the data (different observations, photometric accuracy, errors, etc.), while they use the GAMA spectroscopic redshift information as input to their models. We have found a match of 64 771 galaxies with our catalog, which are plotted in Fig. <ref> against the M_*^ MED estimates from Sect.
<ref>, considering both the spectroscopic and the GaZNet redshifts as input. Since our LP/BC03/ExD is the closest model to their set-up, we also add it for comparison in the same figure. Overall, we see that all the estimates (except the M_*^ MED/spec-z) are consistent within the errors, shown at the bottom of each panel, with a scatter that is always contained within ∼0.2 dex for the spectroscopic redshifts and ∼0.25 dex for the morphoto-metric redshifts. We also clearly observe that LP/BC03/ExD has almost no bias, meaning that the different codes and also the different data have a minimal impact on the final mass estimates. The offset of the M_*^ MED (of the order of 0.15 dex) is due to the relative bias of the different models entering the “median” quantities: in Sect. <ref> this is quantified to be ∼0.10 dex for LP/BC03/ExD (see the blue line in the 2nd row from the top of Fig. <ref>, left panel), hence consistent with the 0.15 dex offset above, considering the scatter of ∼0.2 dex in the top left panel of Fig. <ref>. The bias with respect to the LP/BC03/ExD model becomes even smaller if we use the same baseline as T+11, i.e. the five bands ugriZ, as shown by the orange residuals at the bottom of each panel. This also indicates that the effect of the NIR bands mainly impacts the massive galaxies, where the difference in the masses can be as large as 0.2 dex. We still see, in all cases, the systematic deviation of the sample based on the GaZNet redshifts at log M_*/M_⊙<9 discussed extensively before, which depends on the redshift systematics and not on the stellar population analysis. For the SFRs we make use of the SDSS-DR7 star formation rate catalog (see footnote <ref>) based on the analysis discussed in <cit.>, but see also <cit.>. Here, the star formation rates are computed by directly fitting the emission lines (e.g., Hα, Hβ, [OIII]λ5007, [NII]λ6584, [OII]λ3727, and [SII]λ6716). This offers us the opportunity to check for the presence of biases in our “median” results against spectroscopic inferences, hence based on a more robust method, especially considering the bimodal bias between M05 and BC03 discussed in Sect. <ref>. The comparison of our 9-band, no-NE estimates with the SDSS-DR7 SFRs is shown in Fig. <ref>. We decide to use the no-NE estimates to confirm the small impact of the emission lines on the SED-based SFR estimates, as discussed in Sect. <ref>. In Fig. <ref>, we see that the SFR^ MED are in very good agreement with the SDSS spectroscopic inferences, with a bias which is well within the scatter of the data points. For the morphoto-z sample we see, as usual, the positive bias at low star formation rates induced by the systematics in the morphoto-metric redshifts at low-SFR values, as discussed in Sect. <ref>, although here the offset starts to become significant at log SFR/M_⊙ Gyr^-1≲9, suggesting that the different methods (e.g. emission lines vs. SED fitting) can introduce some biases (see also Sect. <ref>). The scatter remains always confined within ∼0.4 dex (see the values in the figure insets), in line with the results also discussed in the same Sect. <ref>, at least for the higher SFRs (log SFR/M_⊙ Gyr^-1>9). We believe that the evidence collected in this appendix, for both stellar masses and SFRs, supports all the main conclusions of the paper about the robustness of the stellar population quantities from the different methods and set-ups, and the use of the “median” values as unbiased estimators of the true quantities for our galaxy sample, given the range of redshifts adopted.
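For reference, a cross-match of this kind can be reproduced with a few lines of astropy; this is only a sketch, with the column names and the 1 arcsec matching radius chosen by us rather than taken from the paper:

import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_catalogs(ra1, dec1, ra2, dec2, max_sep=1.0):
    # Nearest-neighbour sky match; returns indices into catalog 2 and a mask
    # selecting the pairs closer than max_sep (in arcsec).
    c1 = SkyCoord(ra=np.asarray(ra1) * u.deg, dec=np.asarray(dec1) * u.deg)
    c2 = SkyCoord(ra=np.asarray(ra2) * u.deg, dec=np.asarray(dec2) * u.deg)
    idx, d2d, _ = c1.match_to_catalog_sky(c2)
    return idx, d2d < max_sep * u.arcsec

# The matched masses can then be compared by computing the relative bias, NMAD
# and outlier fraction of Delta log M*, as in the estimator sketch shown earlier.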
§.§ LePhare results with redshift as free parameter Both SED fitting tools, LePhare and CIGALE, can use the redshift as a free parameter during the fitting procedure. This gives us the chance to directly visualize the degeneracies introduced in the final results by the lack of accurate galaxy redshifts. For this particular test we use LePhare to show the impact on the stellar mass estimates. We use the reference set-up, i.e. LP/BC03/ExD, which becomes LP/BC03/ExD/specz for the case with the spec-z fixed and LP/BC03/ExD/zfree in the variant with the redshift as a free parameter. In Fig. <ref> we show: 1) in the left panel, the spec-z vs. the photometric redshifts inferred by LePhare, photo-z_ LP, and 2) in the right panel, the corresponding stellar masses. From this figure, we can clearly see the impact of missing the redshift information in the stellar population analysis, in comparison with the equivalent quantities obtained for the GaZNet morphoto-z (Fig. <ref> and Fig. <ref>, bottom left). This is also quantified in the residual plots at the bottom of Fig. <ref>, where we plot the relative bias and scatter for both the photo-z_ LP and the GaZNet redshift inferences. In particular, the stellar masses in the former case show a bias and scatter that are fully driven by the larger variance of the photometric redshifts: see, e.g., the cloud of galaxies with masses almost parallel to the 1-to-1 relation with a large positive offset at the top of the figure, which is absent in the GaZNet-based estimates. This is confirmed by the global statistical estimators: for the redshifts we have μ=0.005, NMAD=0.039, and an outlier fraction of 2.3%, i.e. much larger than the same quantities derived for the GaZNet redshifts in Fig. <ref> (μ=0.005, NMAD=0.017, and an outlier fraction of 0.4%, respectively). This worsening is mirrored by the estimators for the masses, which for the z_ LP case are Δ p=-0.077, NMAD=0.197, and an outlier fraction of 4.7%, i.e. up to twice as large as the typical values found for the GaZNet morphoto-z equivalents in Table <ref> (Δ p=-0.033, NMAD=0.093, and an outlier fraction of 3.8%). This quantifies the advantage of having accurate photo-z in the stellar population analysis.
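The redshift quality metrics quoted above can be computed along the lines of the following minimal sketch, assuming the usual normalization of the residuals by 1+z_spec; the outlier threshold is our assumption and may differ from the definition used in the paper:

import numpy as np

def photoz_metrics(z_phot, z_spec, out_thresh=0.15):
    # Bias (mu), NMAD and outlier fraction of normalized photo-z residuals.
    dz = (np.asarray(z_phot) - np.asarray(z_spec)) / (1.0 + np.asarray(z_spec))
    mu = np.mean(dz)
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    out_frac = np.mean(np.abs(dz) > out_thresh)   # threshold is an assumption
    return mu, nmad, out_frac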
http://arxiv.org/abs/2307.04302v1
20230710014146
Auction Design for Value Maximizers with Budget and Return-on-spend Constraints
[ "Pinyan Lu", "Chenyang Xu", "Ruilong Zhang" ]
cs.GT
[ "cs.GT" ]
Auction Design for Value Maximizers Pinyan Lu, Chenyang Xu and Ruilong Zhang ITCS, Shanghai University of Finance and Economics, China Huawei TCS Lab, China Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, China Department of Computer Science and Engineering, University at Buffalo, USA [email protected], [email protected], [email protected] Auction Design for Value Maximizers with Budget and Return-on-spend Constraints. All authors (ordered alphabetically) have equal contributions and are corresponding authors. Pinyan Lu1,2 Chenyang Xu3 Ruilong Zhang4 August 12, 2023 =========================================================================================================================================================================== The paper designs revenue-maximizing auction mechanisms for agents who aim to maximize their total obtained values rather than the classical quasi-linear utilities. Several models have been proposed to capture the behaviors of such agents in the literature. In the paper, we consider the model where agents are subject to budget and return-on-spend constraints. The budget constraint of an agent limits the maximum payment she can afford, while the return-on-spend constraint means that the ratio of the total obtained value (return) to the total payment (spend) cannot be lower than the targeted bar set by the agent. The problem was first introduced by <cit.>. In their work, only Bayesian mechanisms were considered. We initiate the study of the problem in the worst-case model and compare the revenue of our mechanisms to an offline optimal solution, the most ambitious benchmark. The paper distinguishes two main auction settings based on the accessibility of agents' information: fully private and partially private. In the fully private setting, an agent's valuation, budget, and target bar are all private. We show that if agents are unit-demand, constant approximation mechanisms can be obtained, while for additive agents, there exists a mechanism that achieves a constant approximation ratio under a large market assumption. The partially private setting is the setting considered in the previous work <cit.>, where only the agents' target bars are private. We show that in this setting, the approximation ratio of the single-item auction can be further improved, and a (1/√(n))-approximation mechanism can be derived for additive agents. § INTRODUCTION In an auction with n agents and m items, the auctioneer decides the allocation x={x_ij}_i∈ [n],j∈ [m] of the items and the agents' payments p={p_i}_i∈ [n]. The agent i's obtained value is usually denoted by a valuation function v_i of the allocation, while the agent's utility depends on both the obtained value and the payment made to the auctioneer. Combining the valuation and payment to get the final utility function is a tricky modeling problem. In classic auction theory and the vast majority of literature from the algorithmic game theory community, one uses the quasi-linear utility function u_i=v_i-p_i, i.e., the utility is simply the obtained value minus the payment. This natural definition admits many elegant mathematical properties and thus has been widely investigated in the literature (e.g. <cit.>). However, as argued in some economics literature <cit.>, this utility function may fail to capture the agents' behaviors and thus cannot fit reality well in some circumstances.
In these circumstances, one usually uses a generic function u=f(v,p) (with monotonicity and possibly convexity properties) to model the utility function. Such a treatment is surely general enough, but usually not explicit enough to draw clear conclusions. In particular, designing non-trivial truthful mechanisms for agents with a generic and inexplicit utility function is difficult. Is there some other explicit utility function (beyond the quasi-linear one) that appears in some real applications? One simple and well-studied model is agents with budget constraints (e.g. <cit.>). In this setting, besides the valuation function, an agent i also has a budget constraint B_i for the maximum payment he can make. In formal language, the utility is u_i := v_i - p_i if p_i≤ B_i, and u_i := -∞ otherwise. In mechanism design, the valuation function v_i(·) is considered the private information of agent i. Thus the auctioneer needs to design a truthful mechanism to incentivize the agents to report their true information. For these models beyond the simplest quasi-linear utility, other parameters might be involved in the agents' utility functions besides the valuation function, such as the budget B in the above example. For the mechanism design problem faced by the auctioneer, one can naturally ask whether these additional parameters are public information or private. Both cases can be studied, and usually, the private information setting is more realistic and, at the same time, much more challenging. This is the case for budget-constrained agents. Both public budget and private budget models are studied in the literature (e.g. <cit.>). Value Maximizer. The above budget-constrained agent is only slightly beyond the quasi-linear model, since the utility is still a quasi-linear function as long as the payment is within the budget. However, for budget-constrained agents it is not uncommon that the objective is to maximize the valuation alone rather than the difference between valuation and payment. This is because in many scenarios the objective/KPI for the department/agent/person who really decides the bidding strategy is indeed the final value obtained. On the other hand, they cannot collect the remaining unspent money by themselves anyway, and as a result, they do not care about the payment that much as long as it is within the budget given to them. For example, in a company or government's procurement process, the agent may be only concerned with whether the procurement budget can generate the maximum possible value. We notice that with the development of modern auto-bidding systems, value maximization is becoming the prevalent behavior model for bidders <cit.>. This motivates the study of value maximizer agents, another interesting explicit non-quasi-linear utility model. In many such applications, there is an additional return-on-spend (RoS) constraint τ_i for each agent i, which represents the targeted minimum ratio between the obtained value and the payment and is referred to as the target ratio in the following. Formally, the utility function is u_i := v_i if p_i≤ B_i and p_iτ_i ≤ v_i, and u_i := -∞ otherwise. As one can see, the value maximizer's utility function is still a function of v and p, but with two additional parameters B and τ, which result from the two constraints. Note that the above utility function is identical to that of <cit.>.
Their paper focused on one particular setting where both the value and the budget are public information, with the RoS parameter τ being the only single-dimensional private information. Considering τ as the only private information helps design better auctions, but it may fail to capture wider applications. On top of capturing more practical applications, we consider the setting where all these pieces of information are private, which we call the fully private setting. This makes designing an efficient auction for the problem challenging. With the focus on the fully private setting, we also consider some partially private settings, for which we can design better mechanisms. There are other definitions of value maximizer in the literature, most of which can be viewed as a special case of the above model <cit.>. For example, there might be no budget constraint (B=∞) or no RoS constraint. Another example is to combine v_i/τ_i as a single value (function). A mechanism for the fully private setting in our model is automatically a mechanism with the same guarantee in all these other models. That is another reason why the fully private setting is the most general one. Revenue maximization and benchmarks This paper considers the revenue maximization objective for the auctioneer when designing truthful[A mechanism is truthful if for any agent i, reporting the true private information always maximizes the utility regardless of other agents' reported profiles, and the utility of any truthtelling agent is non-negative.] mechanisms for value maximizers. For the revenue maximization objective, there are usually two benchmarks, called “first-best" and “second-best". The first-best benchmark refers to the optimal objective we can get if we know all the information. In our setting, it is max_x ∑_i min{B_i, v_i(x)/τ_i}. For the traditional quasi-linear utility function, the first-best benchmark is simply the maximum social welfare one can generate, max_x ∑_i v_i(x). It has been proved that, in the traditional setting, such a benchmark is not achievable and cannot even be approximated within a constant ratio. Thus the research there mainly focuses on the second-best benchmark. The second-best benchmark refers to the setting where the auctioneer additionally knows the distribution of each agent's private information and designs a mechanism to get the maximum expected revenue with respect to the known distribution. The benchmark in <cit.> is also this second-best benchmark, and they provide an optimal mechanism when the number of agents is at most two. It is clear that the first-best benchmark is more ambitious and more robust since it is prior-free. They focus on the second-best in the traditional setting because the first-best is not even approximable. In our new setting with value maximizer agents, we believe it is more important to investigate whether we can achieve the first-best approximately. Thus, we focus on the first-best benchmark in this paper. This is significantly different from the approach of <cit.>. §.§ Our Results Problem Formulation. The formal description of the auction model considered in the paper follows. One auctioneer wants to distribute m heterogeneous items among n agents. Each agent i∈ [n] has a value v_ij per unit for each item j∈ [m] and a budget B_i, representing the maximum amount of money agent i can pay. The agent also has a RoS constraint τ_i, representing the minimum ratio of the received value (return) to the total payment (spend) that she can tolerate.
As mentioned above, several settings regarding the type (public or private) of (B, v, τ) are considered in the paper. Agents are value maximizers subject to their budget constraints and RoS constraints (see <ref> for the formula). The auctioneer aims to design a truthful mechanism that maximizes the total payment. We investigate our model in a few important auction environments. We study both indivisible and divisible items, and both the single-item and the multiple-item auctions. When there are multiple items, we consider the two most important valuations: unit demand and additive. The unit-demand valuation models the setting where the items are mutually exclusive to an agent; the additive valuation models the setting where an agent can obtain multiple items and their values add up. We leave more general valuation functions, such as submodular or subadditive ones, to future study. In the fully private information setting, we obtain constant approximation truthful mechanisms for both the single-item auction and the multiple items auction among unit-demand agents. This is quite surprising given the fact that such a constant approximation to the first-best benchmark is proved to be impossible for classic quasi-linear utility agents even in the single-item setting. The intuitive reason is that the agent is less sensitive to the payment in the value maximizer setting than in the quasi-linear utility setting, and thus the auctioneer has a chance to extract more revenue. But this does not imply that designing a good truthful mechanism is easy. Quite the opposite: we need to bring in some new design and analysis ideas, since truthfulness here differs significantly from the traditional notion because the agents' utility functions are different. For the additive valuation, we provide a constant approximation only under an additional large market assumption. This is obtained by observing an interesting and surprising relationship between our model and the model of “liquid welfare for budget-constrained agents". We also consider the partially private information setting. For the public budget (but private value and target ratio) setting, we obtain an improved constant approximation truthful mechanism for the single-item environment. The improved mechanism for the single-item setting has a much better approximation since we cleverly use the public budget information in the mechanism. For the additive valuation without the large market assumption, we also investigate the private target ratio (but public budget and valuation) setting, which is the setting used in <cit.>, and obtain an Ω(1/√(n))-approximation truthful mechanism. In the additive setting, an agent may get multiple items, and thus the payment she saves on one item can be used for other items, which is impossible in the unit-demand setting. For this reason, agents may become somewhat more sensitive to the payment, which leads to the Ω(1/√(n)) approximation. §.§ Related Works The most relevant work is <cit.>, in which they also aim to design a revenue-maximizing Bayesian mechanism for value maximizers with a generic valuation and utility function under budget and RoS constraints. As mentioned above, they focus on the setting where each agent's only private information is the target ratio, which is referred to as the partially private setting in our paper. They show that under the second-best benchmark, an optimal mechanism can be obtained for the two-agent case. Another closely related line of work is “liquid welfare for budget-constrained agents" <cit.>.
We observe an interesting and surprising relationship between these two models, since the liquid welfare benchmark is almost identical to the first-best benchmark in our setting. Therefore, some algorithmic ideas there can be adapted here. However, there are two significant differences: (1) the objective for the auctioneer is (liquid) welfare rather than revenue, which mainly affects the approximation; (2) the bidders are quasi-linear utility maximizers (within the budget constraint) rather than value maximizers, which mainly affects truthfulness. Observing this relation and these differences, some auction design ideas from that literature inspire part of our methods. Furthermore, building deeper connections or ideal black-box reductions between these two models would be an interesting future direction. The model of budget-feasible mechanisms <cit.> also models the agent as a value maximizer rather than a quasi-linear utility maximizer as long as the payment is within the budget. The difference is that there the value-maximizer agent is the auctioneer rather than the bidders. §.§ Paper Organization In the main body, we focus on the fully private setting, where all the budgets, valuations, and target ratios are private. We first consider the single-item auction in <ref> and then extend the algorithmic ideas to the multiple items auction for unit-demand agents in <ref>. Both environments admit constant approximations. Finally, we turn to the multiple items auction for additive agents, the most challenging environment, and show a constant approximation under an assumption on the budgets in <ref>. For the partially private setting, due to space limitations, we defer all the results to the appendix. In <ref>, we show that a better constant approximation for the single-item environment can be obtained when the budgets become public. Then we leverage this new mechanism to give an Ω(1/√(n)) approximation for the multiple items auction among additive agents in <ref>. § WARM-UP: SINGLE ITEM AUCTION Let us warm up by considering the environment where the auctioneer has only one item to sell. Our first observation is that if the item is indivisible, we can achieve a truthful optimal solution by directly assigning the item to the agent k with the maximum min{B_k,v_k/τ_k} and charging her that value. This is basically a first-price auction with respect to min{ B_i,v_i/τ_i}. The optimality is obvious. For truthfulness, since min{ B_i,v_i/τ_i} is the maximum willingness-to-pay of each agent i, if someone other than k misreports the profile and gets assigned the item, one of the two constraints must be violated. On the other hand, misreporting a lower profile can only lead to a lower chance of winning, without any benefit. There exists a truthful optimal mechanism for the single indivisible item auction. The above theorem gives some intuition for the divisible item environment. If the indivisible optimum is at least a constant fraction c of the divisible optimum, selling the item indivisibly already gives a constant approximation. We refer to this idea as indivisibly selling in the following. In contrast, in the case that the indivisible optimum is smaller than a constant fraction c of the divisible optimum (denoted by OPT in the following), we have min{ B_i,v_i/τ_i}≤ c · OPT for any agent i. This property implies that the random sampling technique can be applied here.
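Before describing the random-sampling procedure in detail, here is a minimal sketch of the indivisibly-selling rule described above, i.e., the first-price rule with respect to min{B_i, v_i/τ_i} (our own code with hypothetical inputs, not the paper's implementation).

def single_indivisible_item_auction(B, v, tau):
    # First-price rule w.r.t. min{B_i, v_i / tau_i}: the agent with the largest
    # willingness-to-pay gets the whole item and is charged exactly that amount.
    wtp = [min(B[i], v[i] / tau[i]) for i in range(len(B))]
    winner = max(range(len(B)), key=lambda i: wtp[i])
    payments = [0.0] * len(B)
    payments[winner] = wtp[winner]
    return winner, payments

# Agent 1 wins with willingness-to-pay min(5, 9/1.5) = 5 (hypothetical numbers).
print(single_indivisible_item_auction(B=[2.0, 5.0, 4.0],
                                      v=[10.0, 9.0, 4.0],
                                      tau=[4.0, 1.5, 1.0]))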
More specifically, we randomly divide the agents into two groups, gather information from one group, and then use the information to guide the item's selling price for the agents in the other group. Since in an optimal solution, each agent does not contribute much to the objective, a constant approximation can be proved by some concentration inequalities based on the above two strategies, we give our mechanism in <ref>. <ref> is feasible, truthful, and achieves an expected approximation ratio of 1/52. The feasibility is obvious. Firstly, since x_i≤α when each agent i comes, ∑_i∈ [n]x_i≤ 1. Secondly, due to x_i≤B_i/r for each agent i, p_i=x_i · r ≤ B_i. Thirdly, for each agent i, we have x_iv_i≥ p_iτ_i because an agent buys some fractions of the item and gets charged only if r≤v_i/τ_i. Then we show that regardless of which procedure is executed, the mechanism is truthful. The truthfulness of the first procedure is proved by <ref> directly. For the second procedure, we show that agents in neither S nor R have the incentive to lie. For an agent in S, she will not be assigned anything, and therefore, misreporting her information cannot improve her utility; while for the agents in R, they are also truthtelling because their reported information determines neither the arrival order nor the reserve price, and misreporting a higher v_i/τ_i (resp. a larger B_i) to buy more fractions of the item must violate the RoS (resp. budget) constraint of agent i. Finally, we analyze the approximation ratio. Let (^*,^*) be an optimal solution. Use and to denote the optimal payment and our total payment, respectively. Without loss of generality, we can assume that p_i^*=x_i^* ·v_i/τ_i≤ B_i. Clearly, if there exists an agent l with p_l^*≥1/36, we can easily bound the expected total payment by the first procedure: () ≥9/13·min{ B_i,v_i/τ_i}≥9/13·min{ B_l,v_l/τ_l}≥1/52. Otherwise, we have p_i^* < 1/36 ∀ i∈ [n]. Then according to the concentration lemma proved in <cit.>, we can establish the relationship between ∑_i∈ Sp_i^* and in the second procedure: [1/3≤∑_i∈ Sp_i^* ≤2/3]≥3/4. Namely, with probability of at least 3/4, both ∑_i∈ Sp_i^* and ∑_i∈ Rp_i^* are in [1/3,2/3]. Let us focus on the second procedure and consider a subset S such that ∑_i∈ Sp_i^*∈ [1/3,2/3]. We distinguish two cases based on the final remaining fraction of the item. If the item is sold out, our payment is at least 1/4∑_i∈ Sp_i(^S). Since (^S,(^S)) is the optimal solution of distributing the item among the agents in S, we have ≥1/4∑_i∈ Sp_i(^S)≥1/4∑_i∈ S p_i^* ≥1/12. If the procedure does not sell out the item, for any agent i∈ R who does not exhaust the budget, v_i/τ_i < r = 1/4∑_i∈ Sp_i(^S). Using T⊆ R to denote such agents, we have 1/3≤∑_i∈ Rp_i^* ≤∑_i∈ R∖ T B_i + ∑_i∈ T p^*_i ≤ + ∑_i∈ Tv_i/τ_i x_i^* ≤ + 1/4∑_i∈ Sp_i(^S)∑_i∈ T x_i^* ≤ + 1/4∑_i∈ Sp_i(^S) ≤ + 1/4 . We have ≥1/12 from the above inequality. Thus, in either case, is at least 1/12 under such a subset S. Then according to <ref>, we can complete the proof: () ≥4/13·3/4·1/12 = 1/52. § MULTIPLE ITEMS AUCTION FOR UNIT DEMAND AGENTS This section considers the environment where the auctioneer sells multiple items to unit-demand agents, a set of agents who each desires to buy at most one item. We extend the results in the last section and show that a constant approximation can still be obtained. Similar to the study of the single-item auction, <ref> starts from the indivisible goods environment and shows a 1/2-approximation. 
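Before moving on to the multi-item mechanisms, the following hypothetical sketch summarizes the single divisible item mechanism analyzed above: following the probabilities appearing in the analysis, with probability roughly 9/13 it sells the item indivisibly as in the previous sketch, and otherwise it samples a set S, computes the optimal revenue extractable from S alone (for one divisible item this is a fractional-knapsack greedy), and uses a quarter of that revenue as the per-unit reserve price for the remaining agents. All names are ours, values are assumed positive, and details such as tie-breaking follow our reading of the analysis rather than the paper's pseudocode.

import random

def optimal_revenue_on_set(agents, B, v, tau):
    # Offline optimum of selling one divisible unit to `agents` only:
    # fractional-knapsack greedy in decreasing order of v_i / tau_i.
    remaining, revenue = 1.0, 0.0
    for i in sorted(agents, key=lambda a: v[a] / tau[a], reverse=True):
        x = min(remaining, B[i] * tau[i] / v[i])   # fraction before the budget binds
        revenue += x * v[i] / tau[i]
        remaining -= x
        if remaining <= 0:
            break
    return revenue

def single_divisible_item_mechanism(B, v, tau, rng=random):
    n = len(B)
    wtp = [min(B[i], v[i] / tau[i]) for i in range(n)]
    if rng.random() < 9 / 13:                         # procedure 1: sell indivisibly
        k = max(range(n), key=lambda i: wtp[i])
        return {k: (1.0, wtp[k])}
    S = {i for i in range(n) if rng.random() < 0.5}   # sampled agents buy nothing
    r = optimal_revenue_on_set(S, B, v, tau) / 4.0    # per-unit reserve price
    sold, outcome = 0.0, {}
    for i in range(n):                                # remaining agents, fixed order
        if i in S or r <= 0 or v[i] / tau[i] < r:
            continue
        x = min(1.0 - sold, B[i] / r)
        if x > 0:
            outcome[i] = (x, x * r)                   # (fraction bought, payment)
            sold += x
    return outcome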
For the divisible goods environment, our mechanism is also a random combination of the “indivisibly selling" procedure and the “random sampling" procedure. However, the mechanism and its analysis are much more complicated than those for the single-item environment, and this section is the most technical part of this paper. We describe the indivisibly selling procedure in <ref>. For the random sampling procedure, the multiple-item setting needs a variant of greedy matching (<ref>) to compute the reserve price of each item; <ref> discusses this algorithm. Finally, <ref> analyzes the combined mechanism (<ref>). In order to analyze the approximation ratio of <ref>, we introduce <ref>, a non-truthful mechanism used purely for analysis, to bridge <ref> and <ref>. §.§ Indivisibly Selling We first prove the claimed truthful constant approximation in the scenario of selling indivisible items and then give two corollaries to show the performance of applying the indivisibly-selling idea to distributing divisible items. Consider the indivisible goods setting. For each agent-item pair (i,j), define its weight w_ij to be the maximum amount of money that we can charge agent i if assigning item j to her, i.e., w_ij=min{B_i,v_ij/τ_i}. Since items are indivisible and each agent only wants to buy at most one item, a feasible solution is essentially a matching between the agent set and the item set, and the goal is to find a maximum weighted matching. However, the algorithm that outputs the maximum weighted matching is not truthful. We observe that a natural greedy matching algorithm returns a constant approximation while retaining truthfulness. The mechanism is described in <ref>. <ref> is feasible, truthful and achieves an approximation ratio of 1/2 when items are indivisible. The feasibility is obvious since min{B_i,v_ij/τ_i} is the maximum willingness-to-pay of agent i when adding (i,j) into the matching. To prove the truthfulness, we show that once an agent misreports the profile and obtains a higher value, either the budget constraint or the RoS constraint must be violated. Since the agent-item pairs are sorted in the decreasing lexicographical order of (min{B_i,v_ij/τ_i}, v_ij), the matched item value of agent i is non-increasing when none of the related agent-item pairs are ranked higher. Thus, once the agent misreports a profile (B_i', v_i',τ_i') and gets assigned an item k with a higher value, the rank of pair (i,k) must get improved, implying that min{B_i',v'_ik/τ_i'} > min{B_i,v_ik/τ_i}. Since the mechanism charges this agent min{B_i',v'_ik/τ_i'} under the new reported profile, either the budget constraint or the RoS constraint must be unsatisfied. Finally, we prove the approximation ratio by the standard analysis of the greedy matching algorithm. For each pair (i,j) in an optimal matching, there must exist a pair (either (i,j') or (i',j)) in the greedy matching whose weight is at least w_ij. Thus, the maximum matching weight is at most twice the weight of our matching, and <ref> gets a 1/2-approximation. Consider a feasible solution ={z_ij}_i∈ [n],j∈ [m] (not necessarily truthful) for the multiple divisible items auction among unit-demand agents. We assume that each unit-demand agent i has at most one variable z_ij>0. Define _j() := ∑_i:z_ij>0 p_i to be the total payment related to item j. We observe the following two corollaries. If the solution is an α-approximation and for any item j, max_i∈ [n]min{v_ij/τ_iz_ij,B_i}≥β·_j(), then running <ref> directly obtains an approximation ratio of αβ/2.
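A minimal sketch of this greedy-matching rule follows (our own code; the example instance is hypothetical): agent-item pairs are scanned in decreasing lexicographic order of (min{B_i, v_ij/τ_i}, v_ij), and each matched agent is charged her willingness-to-pay for the item she receives.

def greedy_matching_auction(B, v, tau):
    # Indivisible items, unit-demand agents: scan agent-item pairs in decreasing
    # lexicographic order of (min{B_i, v_ij/tau_i}, v_ij) and match greedily,
    # charging each matched agent her willingness-to-pay for the assigned item.
    n, m = len(B), len(v[0])
    pairs = sorted(((min(B[i], v[i][j] / tau[i]), v[i][j], i, j)
                    for i in range(n) for j in range(m)), reverse=True)
    taken_agents, taken_items = set(), set()
    allocation, payments = {}, [0.0] * n
    for wtp, _, i, j in pairs:
        if i in taken_agents or j in taken_items:
            continue
        taken_agents.add(i)
        taken_items.add(j)
        allocation[i] = j
        payments[i] = wtp
    return allocation, payments

# Hypothetical instance with two agents and two items.
alloc, pay = greedy_matching_auction(B=[4.0, 10.0],
                                     v=[[6.0, 3.0], [8.0, 5.0]],
                                     tau=[1.0, 2.0])
print(alloc, pay)   # agent 1 gets item 0 (pays 4), agent 0 gets item 1 (pays 3)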
For a constant β∈ [0,1], define item subset H(,β)⊆ [m] to be the set of items with max_i∈ [n]min{v_ij/τ_iz_ij,B_i}≥β·_j(). Running <ref> directly obtains a total payment at least β/2∑_j∈ H(,β)_j() for any β∈ [0,1]. §.§ Foundations of Random Sampling The subsection explores generalizing the random sampling procedure in <ref> to multiple items auction. We first randomly sample half of the agents and investigate how much revenue can be earned per unit of each item if the auctioneer only sells the items to these sampled agents. Recall that the mechanism does not actually distribute any item to the sampled agents. Then, the auctioneer sets the reserve price of each item based on the investigated revenues and sells them to all the remaining agents. More specifically, let these agents arrive in a fixed order. When an agent arrives, she is allowed to buy any remaining fraction of any item as long as she can afford the reserve price. It is easy to observe that the mechanism is still truthful according to the same argument in the proof of <ref>: for a sampled agent, she will not be assigned anything, and therefore, she does not have any incentive to lie; while for the agents that do not get sampled, they are also truthtelling because neither the arrival order nor the reserve prices are determined by their reported profiles and a fake profile that can improve the agent's obtained value must violate at least one constraint. The key condition that random sampling can achieve a constant approximation ratio is that the revenue earned by each item among the sampled agents is (w.h.p.) close to its contribution to the objective in an optimal solution or a constant approximation solution; otherwise, there is no reason that the reserve prices are set based on the investigated revenues. Unfortunately, unlike the single-item environment, we cannot guarantee that an optimal solution of the multiple items auction satisfies this condition. Thus, to obtain such a nice structural property, we present an algorithm based on greedy matching and item supply clipping in <ref>. Note that this algorithm is untruthful, and we only use it to simulate the auction among the sampled agents. We first prove that it obtains a constant approximation, and then show several nice structural properties of the algorithm. The approximation ratio of <ref> is 1/6. Use (^*={x_ij^*},^*={p_i^*}) and (={x_ij}_i∈ [n], j∈ [m], ={p_i}) to represent the allocations and the payments in an optimal solution and <ref>'s solution respectively. Without loss of generality, we can assume that p_i^*=∑_j∈ [m]x_ij^*w_ij≤ B_i for any i∈ [n]. For each item j∈ [m], define A_j to be the set of agents who buy some fractions of item j in the optimal solution, i.e., A_j := { i∈ [n] | x_ij^*>0 }, and then based on , we partition A_j into three groups: A_j^(1) = { i∈ [n] | x_ij > 0 }, A_j^(2) = { i∈ [n] | x_ij = 0 due to R_j≤ 1/2 }, A_j^(3) = { i∈ [n] | x_ij = 0 due to agent i has bought another item}. Note that if some agent does not buy the item j in due to both of the two reasons, we add the agent into an arbitrary one of A_j^(2) and A_j^(3). Use and to denote the objective values of the optimal solution and our solution, respectively. Based on the partition mentioned above, we split the optimal objective into three parts: = ∑_i∈ [n], j∈ [m] x_ij^*w_ij = ∑_j∈ [m]∑_i∈ A_j^(1) x_ij^*w_ij + ∑_j∈ [m]∑_i∈ A_j^(2) x_ij^*w_ij +∑_j∈ [m]∑_i∈ A_j^(3) x_ij^*w_ij . 
In the following, we analyze the three parts one by one and show that each part is at most twice , which implies that is 1/6 approximation. Due to the definition of A_j^(1), for each (i,j) pair in the first part, <ref> assigns some fractions of item j to agent i, and therefore, x_ij≥min{1/2,B_i/w_ij}. Since x_ij^*≤ 1 and we assume w.l.o.g. that x_ij^*≤B_i/w_ij, we have ∑_j∈ [m]∑_i∈ A_j^(1) x_ij^*w_ij ≤∑_j∈ [m]∑_i∈ A_j^(1)min{w_ij,B_i}≤∑_j∈ [m]∑_i∈ A_j^(1) 2w_ijmin{1/2,B_i/w_ij} ≤∑_j∈ [m]∑_i∈ A_j^(1) 2x_ijw_ij≤ 2 . For each item j with non-empty A_j^(2), <ref> must sell at least half of the item, and then due to the greedy property of the algorithm, we have ∑_i∈ A_j^(2) x_ij^*w_ij≤ 2_j(), recalling that _j()=∑_i:x_ij>0p_i. Thus, ∑_j∈ [m]∑_i∈ A_j^(2) x_ij^*w_ij≤∑_j∈ [m] 2_j() ≤ 2 . Finally, for each item j and agent i∈ A_j^(3), suppose that agent i buys some fractions of item j' in solution . Due to the greedy property, w_ij≤ w_ij'. Hence, x^*_ijw_ij≤min{B_i,w_ij'}≤ 2 min{B_i/w_ij;,1/2} w_ij'≤ 2x_ij'w_ij'=2p_i. Summing over these (i,j) pairs, ∑_j∈ [m]∑_i∈ A_j^(3) x_ij^*w_ij = ∑_i∈ [n]∑_j:i∈ A_j^(3) x_ij^*w_ij≤∑_i∈ [n] 2p_i = 2. Combining <ref>, <ref> and <ref> completes the proof. Note that the item supply clipping parameter 1/2 in <ref> can be replaced by any other constant in (0,1). By setting this parameter to be √(2)/1+√(2), the algorithm can get an approximation ratio of 3+2√(2). For an agent subset S⊆ [n], use (^S, ^S) to denote the allocation and the payments if using <ref> to distribute all the items to agents in S. We claim the following lemma. For any agent subset S⊆ [n], we have * agent payment monotonicity: p_i^S≥1/2 p_i, ∀ i∈ S. * selling revenue monotonicity: _j(^S) ≤ 2_j(), ∀ j∈ [m]. Use R_j(i,k) and R_j^S(i,k) to denote the remaining fractions of item j at the end of pair (i,k)'s iteration when running <ref> for all the agents and for the agent subset S, respectively. Note that if i∉S, the corresponding iterations are viewed as empty iterations. We first show a key lemma that helps prove the two properties. Consider an agent i and let k and k' be the items that she buys in and ^S respectively. We have ∀ j∈ [m], max{ R_j(i,k),1/2}≤max{ R_j^S(i,k'),1/2}. We first show that for any pair (i,k) and any item j, max{ R_j(i,k),1/2}≤max{ R_j^S(i,k),1/2}, Assume for contradiction that <ref> is violated for some agent-item pairs. Let (i,k) be the first such pair in the order stated in <ref>. Notice that in this iteration, only the remaining fraction of item k could change. We distinguish three cases: (1) x_ik^S=0, (2) x_ik^S>0 and x_ik > 0, and (3) x_ik^S>0 and x_ik = 0. With some abuse of notation, we use R_j^-(i,k) (resp. R_j^S-(i,k)) to denote the remaining fraction of item j at the beginning of the iteration. For case (1), the remaining fraction R_k^S remains unchanged. Thus, max{ R_k^-(i,k),1/2}≥max{ R_k(i,k),1/2} > max{ R_k^S(i,k),1/2} = max{ R_k^S-(i,k),1/2}, contradicting the assumption that (i,k) is the first such pair. For case (2), we have x_ik^S=min{R_k^S-(i,k), B_i/w_ik} and x_ik=min{R_k^-(i,k), B_i/w_ik} according to the algorithm. If x_ik = R_k^-(i,k), then clearly, R_k^-(i,k) becomes 0 and  <ref> certainly holds; while if x_ik=B_i/w_ik, we have x_ik^S ≤B_i/w_ik= x_ik, and R_k^S(i,k) = R_k^S-(i,k) - x_ik^S≥ R_k^-(i,k)-x_ik=R_k(i,k), contradicting the definition of pair (i,k). For case (3), if x_ik = 0 is due to R_j^-(i,k)<1/2, it is impossible that <ref> gets violated. Hence, the only reason that x_ik = 0, in this case, is that agent i has bought another item k'. 
This implies that in the iteration of pair (i,k'), we have x_ik'^S=0 and x_ik' > 0. Since agent i had not bought any item that time, the only reason for x_ik'^S=0 is that R_k'^S-(i,k')< 1/2. Due to the definition of (i,k) and the fact that (i,k') is in front of (i,k) in the order, we have R_k'^-(i,k') ≤max{ R_k'^-(i,k'),1/2}≤max{ R_k'^S-(i,k'),1/2} = 1/2, contradicting to x_ik' > 0. Thus, <ref> holds for any agent-item pair. Then due to the same argument in the analysis of case (3) above, we see that (i,k') must be in front of (i,k) in the order, implying that R_j^S(i,k)≤ R_j^S(i,k'). Finally, max{ R_j(i,k),1/2}≤max{ R_j^S(i,k),1/2}≤max{ R_j^S(i,k'),1/2}. We build on <ref> to prove the two properties one by one. Consider an agent i∈ S and let k and k' be the items that she buys in and ^S respectively (w.l.o.g., we can assume that each agent always buys something by adding some dummy items with value 0.). Due to <ref> and the greedy property of <ref>, we have w_ik'≥ w_ik. Thus, p_i^S = w_ik'· x^S_ik'≥ w_ik'·min{B_i/w_ik', 1/2} ≥ w_ik·1/2·min{B_i/w_ik, 1 }≥ w_ik·1/2· x_ik ≥1/2p_i, which proves the agent payment monotonicity. Now we prove the selling revenue monotonicity. Consider an arbitrary item j. Use A_j^S and A_j to denote the agents who buy some fractions of item j in solution ^S and , respectively. Further, let l^S and l be the last buyer in A_j^S and A_j, respectively. According to the assignment rule in the algorithm, for each agent i∈ A_j^S ∩ A_j ∖{l}, we have x_ij^S ≤B_i/w_ij = x_ij; while for agent l, similar to the analysis in the last paragraph, x_lj^S ≤min{B_i/w_ij,1}≤ 2min{B_i/w_ij,1/2}≤ 2x_il. Thus, if A_j^S ⊆ A_j, clearly, we have _j(^S)=∑_i∈ A_j^Sx_ij^Sw_ij≤ 2∑_i∈ A_j^Sx_ijw_ij≤ 2_j(). It remains to show the case that A_j^S ∖ A_j ≠∅. For an agent i∈ A_j^S ∖ A_j, we have x_ij^S>0 but x_ij=0. Again, due to <ref>, we see the only reason is that in the process of computing solution , the remaining fraction of item j in that iteration is less than 1/2; otherwise, agent i must buy an item with a larger weight in solution ^S. Then due to the greedy property, we have ∀ i∈ A_j^S ∖ A_j, w_ij≤min_i'∈ A_j w_i'j = w_lj. Thus, the property can be proved: _j(^S) =∑_i∈ A_j^S∩ A_j ∖{l}x_ij^Sw_ij + ∑_i∈ A_j^S∖ A_j x_ij^Sw_ij + x_lj^Sw_lj ≤∑_i∈ A_j^S∩ A_j ∖{l}x^S_ijw_ij + (∑_i∈(A_j^S∖ A_j ) ∪{l}x^S_ij)· w_lj ≤∑_i∈ A_j^S∩ A_j ∖{l}x^S_ijw_ij + (1-∑_i∈ A_j^S∩ A_j ∖{l}x^S_ij)· w_lj ≤∑_i∈ A_j^S∩ A_j ∖{l}x_ijw_ij + (1-∑_i∈ A_j^S∩ A_j ∖{l}x_ij)· w_lj ≤ 2_j(), where the last inequality used the fact that at least half of the item has been sold out in solution . By <ref>, we have the following corollary. Randomly dividing all the agents with equal probability into set S and R, we have (∑_j∈ [m]_j(^S) ) = (∑_i∈ S p^S_i ) ≥1/2(∑_i∈ S p_i ) = 1/4∑_i∈ [m] p_i≥1/4∑_j∈ [m]_j(). §.§ Final Mechanism This subsection states the final mechanism, which is a random combination of the indivisibly selling idea and the random sampling idea. To streamline the analysis, we first introduce an auxiliary mechanism which is constant-approximate but not truthful, and then show it can be altered to a truthful mechanism by losing only a constant factor on the approximation ratio. <ref> obtains a constant approximation ratio. Recollect that H(,β):={ j∈ [m] |max_i∈ [n]min{v_ij/τ_iz_ij,B_i}≥β·_j() } defined in <ref>. To prove <ref>, we partition all the items into two sets: H(,1/144) and (,1/144) = [m] ∖ H(,1/144). 
<ref> directly implies that the first procedure (<ref>) guarantees our objective value is at least a constant fraction of ∑_j∈ H(,1/144)_j(). The revenue obtained by the first procedure in <ref> is at least 1/288∑_j∈ H(,1/144)_j(). For the second procedure, we show that ∑_j∈(,1/144)_j() can be bounded by the total payment obtained by this procedure. More specifically, we prove the following technical lemma. The expected revenue obtained by the second procedure in <ref> is at least 1/192∑_j∈(,1/144)_j() - 7/96∑_j∈ H(,1/144)_j(). Let F and D be the set of items that are sold out and the set of agents that use up their budgets in our solution, respectively. According to <ref>, for a pair (i,k(i)), if i∉ D and k(i)∉ F, w_i,k(i) < r_k(i) = 1/12_j(^S). We observe two lower bounds of the objective value of our solution: ≥∑_j∈ F1/12_j(^S), and ≥∑_i∈ D B_i ≥∑_j∉ F∑_i∈ D z_ijw_ij = ∑_j∉ F( ∑_i∈ R z_ijw_ij- ∑_i∈ R∖ D z_ijw_ij) ≥∑_j∉ Fmax{0,( ∑_i∈ R z_ijw_ij- 1/12_j(^S) )}, where the last inequality used <ref>. For simplicity, use _j(∩ S) to denote ∑_i∈ S z_ijw_ij. Combing the two lower bounds, we have 2≥∑_j∈ F1/12_j(^S) + ∑_j∉ Fmax{0,( _j(∩ R)- 1/12_j(^S) )}, and thus, 2() ≥∑_j∈ [m](_j∈ F·1/12_j(^S) + _j∉ F·( _j(∩ R)- 1/12_j(^S) ) ), where _(·) is an indicator function of the event (·). According to the definition of (,1/144), Chebyshev's inequality and the concentration lemma <cit.>, for any item j∈(,1/144), we have [1/3_j() ≤_j(∩ S) ≤2/3_j()] ≥15/16, which implies that with high probability, _j(∩ R)- 1/12_j(^S) ≥1/3_j() -1/12_j(^S) ≥1/12_j(^S), where the last inequality used the selling revenue monotonicity. Use Π_j to denote the event that the sampled subset S satisfies 1/3_j() ≤_j(∩ S) ≤2/3_j(). Combining <ref> and <ref>, 2() ≥∑_ j∈(,1/144) [Π_j]·(_j∈ F·1/12_j(^S) + _j∉ F·( _j(∩ R)- 1/12_j(^S) ) | Π_j ) ≥1/12·∑_j∈(,1/144)[Π_j]·( _j(^S) | Π_j). We continue to find a lower bound of ∑_j∈(,1/144)[Π_j]·( _j(^S) | Π_j). Observe that ∑_j∈ [m]( _j(^S) ) = ∑_j∈ H(,1/144)( _j(^S) ) + ∑_j∈(,1/144)( _j(^S) ) = ∑_j∈ H(,1/144)( _j(^S) ) + ∑_j∈(,1/144)[Π_j]·( _j(^S) | Π_j) + ∑_j∈(,1/144)[⌝Π_j]·( _j(^S) | ⌝Π_j) ≤ ∑_j∈ H(,1/144)( _j(^S) ) + ∑_j∈(,1/144)[Π_j]·( _j(^S) | Π_j) + 1/16·∑_j∈(,1/144)( _j(^S) | ⌝Π_j) ≤ ∑_j∈(,1/144)[Π_j]·( _j(^S) | Π_j) + 2∑_j∈ H(,1/144)_j() + 1/8∑_j∈(,1/144)_j() Combining the above inequality and <ref>, we get the lower bound: ∑_j∈(,1/144)[Π_j]·( _j(^S) | Π_j) + 2∑_j∈ H(,1/144)_j() + 1/8∑_j∈(,1/144)_j() ≥1/4∑_j∈ [m]_j() ∑_j∈(,1/144)[Π_j]·( _j(^S) | Π_j) ≥1/8∑_j∈(,1/144)_j() - 7/4∑_j∈ H(,1/144)_j() . Thus, due to <ref>, we complete the proof: () ≥1/192∑_j∈(,1/144)_j()- 7/96∑_j∈ H(,1/144)_j() Combing <ref>, <ref> and the probabilities set in <ref>, () ≥45/47·1/288∑_j∈ H(,1/144)_j() + 2/47·( 1/192∑_j∈(,1/144)_j() - 7/96∑_j∈ H(,1/144)_j() ) = 1/4512∑_j∈ [m]_j() ≥1/27072, where the last inequality used <ref>. Finally, we present our final mechanism in <ref>. The only difference from <ref> is that in the last step of the second procedure, we let the agent choose any item she wants as long as she can afford the reserve price, and then charge her the maximum willingness-to-pay. <ref> is feasible, truthful, and constant-approximate. According to <ref>, the first procedure is feasible and truthful. For the second procedure, the mechanism is truthful since any agent is charged her maximum willingness-to-pay. Then according to the same argument in the proof of <ref>, we can prove the truthfulness. The following focuses on analyzing the approximation ratio. 
To this end, we couple the randomness in  <ref> and <ref>. The two algorithms are almost identical to each other except for one line and their randomness can be coupled perfectly. If by the coupling of randomness, both algorithms execute the first procedure, they are exactly identical and thus <ref> also apples to  <ref>. Now, by randomness, they both execute the second procedure. In the second procedure, we can further couple the randomness so that they randomly sample the same set S. Conditional on all these (they both execute the second procedure and sample the same set S), we prove that the revenue of  <ref> is at least 1/4 of that of  <ref>. Let (,) and (',') be the two solutions respectively of  <ref> and <ref> under the above conditions. Let and ' be their revenues respectively. Use A_j' to denote the agents who buy some fractions of item j in solution '. According to <ref>, ' = ∑_j∈ [m]∑_i∈ A_j'x_ij' r_j. For an item j, if the corresponding revenue in <ref>'s solution _j() ≥1/2r_j, we have ∑_i∈ A_j'x_ij' r_j ≤ 2_j(), and then summing over all such items, ∑_j:_j() ≥1/2r_j∑_i∈ A_j'x_ij' r_j ≤∑_j:_j() ≥1/2r_j 2_j() ≤ 2. For each item j with _j() < 1/2r_j, we distinguish three cases for agents in A_j' based on (,): (1) p_i=B_i, (2) p_i<B_i and x_ij>0, and (3) p_i<B_i and x_ij=0. For case (1), clearly, x_ij'r_j ≤ B_i = p_i . For case (2), since _j() < 1/2r_j, the remaining fraction of item j is at least 1/2 when <ref> let agent i buy, and therefore, x_ij≥min{1/2,B_i/ r_j }. According to p_i<B_i, we have p_i=w_ijx_ij≥ r_j min{1/2,B_i/ r_j }. Then, due to x_ij'≤min{ 1,B_i/ r_j }, x_ij'r_j≤ 2p_i. For case (3), suppose that agent i buys item k in solution . Since the remaining fraction of item j is at least 1/2 and the agent always pick the most profitable part in <ref>, we have min{1/2, B_i/r_j}· v_ij≤ x_ik v_ikmin{1/2, B_i/r_j}· w_ij≤ x_ik w_ik. Again, due to p_i<B_i, r_j ≤ w_ij and x_ij'≤min{ 1,B_i/ r_j }, we have 1/2x_ij'r_j ≤min{1/2, B_i/r_j}· w_ij≤ x_ik w_ik = p_i. Due to <ref>, <ref> and <ref>, for an item with _j() < 1/2r_j, in either case, we always have x_ij'r_j ≤ 2p_i. Thus, summing over all such items and the corresponding agents, ∑_j:_j() < 1/2r_j∑_i∈ A_j' x_ij'r_j ≤∑_j:_j() < 1/2r_j∑_i∈ A_j' 2p_i ≤ 2. Combining <ref> and <ref> proves '≤ 4. Combining this with <ref>, we know that The expected revenue obtained by the second procedure in <ref> is at least 1/768∑_j∈(,1/144)_j() - 7/384∑_j∈ H(,1/144)_j(). Further combining with  <ref>, which we argued also applies to <ref>, we have the expected revenue obtained by <ref> is at least 45/53·1/288∑_j∈ H(,1/144)_j() + 8/53·( 1/768∑_j∈(,1/144)_j() - 7/384∑_j∈ H(,1/144)_j() ) = 1/5088∑_j∈ [m]_j() ≥1/30528. In the proof of <ref>, we have not tried to optimize the constants in our analysis in the interests of expositional simplicity. The parameters (e.g. 45/47 and 1/144) in our algorithm and analysis can be easily replaced by some other constants in (0,1) to obtain another constant approximation ratio. § MULTIPLE ITEMS AUCTION FOR ADDITIVE AGENTS This section studies the setting where the auctioneer has multiple items to sell and the agents are additive, that is, everyone can buy multiple items and obtain the sum value of the items. This environment is more challenging than the previous one, and some algorithmic ideas introduced in the last section are hard to apply. 
For example, one of the most critical components of <ref> is indivisibly selling, which is based on the observation that selling indivisible goods to unit-demand agents is much easier than selling divisible goods. However, this is not true in the additive valuation environment. To see this quickly, suppose that we have an approximation mechanism for selling indivisible goods to additive agents. Then we can obtain a mechanism for divisible goods with almost the same approximation ratio by splitting each item into tiny sub-items and selling them indivisibly. Thus, in the additive valuation environment, selling indivisible items is harder than selling divisible items. Fortunately, we find that the idea of random sampling still works in this environment. Due to the relationship between our model and the liquid welfare maximization model, the theoretical guarantee of the random sampling mechanism in <cit.> directly implies a constant approximation for our problem under a large market assumption on the agents' budgets (that is, every budget B_i is small relative to the optimal objective; roughly, B_i ≤ OPT/(c · m) for any agent i, where c is a sufficiently large constant). This part is technically simple. We only state the theorem and the high-level idea here and defer some details to <ref>. There exists a truthful constant approximation for the multiple items auction among additive agents under the large market assumption. For an instance I = (B, v, τ) of our model, we can easily construct a liquid welfare maximization instance I' = (B', w'), where for each agent i, the budget is B_i'=B_i and the valuation is w_ij'=v_ij/τ_i ∀ j∈ [m]. Since, given the same allocation, the maximum willingness-to-pay of an agent in I is exactly the agent's liquid welfare in I', the two instances share the same offline optimal objective value. Our mechanism simply runs the random sampling mechanism proposed in <cit.>[See <ref> for the description of the mechanism] on the reduced instance I'. <cit.> showed that when the agents are quasi-linear utility maximizers subject to budget constraints, the total revenue obtained by the mechanism is at least a constant fraction of the optimal objective. We note that the behavior of a value maximizer in the random sampling mechanism is different from the behavior of a quasi-linear utility maximizer. Thus, we cannot directly conclude the proof from OPT(I)=OPT(I'). The mechanism lets the agents come in a fixed order and allows each arriving agent to buy any fraction of the items she wants at the reserve prices. A quasi-linear utility maximizer will never buy any fraction of an item whose reserve price is higher than her valuation (over the target ratio). However, a value maximizer may be interested in buying such items, because the overall RoS constraint can still be satisfied even if, for some items, the purchase prices are higher than the valuations (over the target ratio). We complete the proof by showing that the revenue obtained among value maximizers is always at least that obtained among quasi-linear utility maximizers. The key observation is that when an agent arrives, regardless of her type, she solves a constrained knapsack optimization problem. In other words, the agent sorts all the available items in decreasing order of the ratio of the valuation w'_ij to the reserve price r_j, and then buys them sequentially as long as the constraints are satisfied.
For a quasi-linear utility maximizer, she will keep buying until the budget is exhausted or all the remaining items have w'_ij/r_j<1; a value maximizer, in contrast, may not stop immediately once all the remaining items have w'_ij/r_j<1 while she still has budget left; instead, she will continue buying until the budget is used up or the overall RoS constraint is about to be violated. According to the above argument, it is easy to verify that the sold fraction of each item is non-decreasing when the agent becomes a value maximizer, and thus the total revenue obtained among value maximizers is non-decreasing and is at least a constant fraction of OPT(I'). § CONCLUSION AND OPEN PROBLEMS We investigate the value maximizer emerging in the recent literature but also depart significantly from its modeling. We believe that the model and benchmark proposed in this paper are, on the one hand, more realistic and, on the other hand, friendlier to the AGT community. We obtain a few non-trivial positive results which indicate that this model and benchmark are indeed tractable. There are also many open questions left. For the additive valuation, it is open whether we can get a constant approximation. It would be interesting to design mechanisms with better approximations for the single-item and unit-demand settings, since the constants in our current ratios are fairly large. We also want to point out that no lower bound is obtained in this model, and thus any non-trivial lower bound would be interesting. We get a much better approximation ratio for the single-item environment when valuation and budget are public than in the fully private setting. However, this is not a separation, since we have no lower bound. Any separation result between the different information models, in terms of public and private information, would be interesting. § ACKNOWLEDGMENT Chenyang Xu and Pinyan Lu were supported in part by Science and Technology Innovation 2030 –“The Next Generation of Artificial Intelligence” Major Project No.2018AAA0100900. Additionally, Chenyang Xu received support from the Dean's Fund of Shanghai Key Laboratory of Trustworthy Computing, East China Normal University. Ruilong Zhang was supported by NSF grant CCF-1844890. § PARTIALLY PRIVATE SETTING This section studies a partially private setting proposed by <cit.> where the budgets and values are all public. We first show that a better constant approximation for the single item auction can be obtained when the budgets become public in <ref>. Then we build on the new single item auction to give an Ω(1/√(n)) approximation for the multiple items auction among additive agents with public budgets and values in <ref>. §.§ Single Item Auction with Public Budgets This subsection improves upon the previous approximation in <ref> when the agents' budgets become public. The high-level idea is similar to the uniform price auction for liquid welfare maximization proposed in <cit.>: allocate the item according to the maximum selling price such that, if all agents buy the item at this price per unit, the item is guaranteed to be sold out. Such a selling price is referred to as a market clearing price. However, new truthfulness challenges arise when applying the market clearing price idea to our auction environment. For example, the market clearing price may remain unchanged when some agent changes her reported profile. In this case, the agent may misreport a lower target ratio or a larger value to obtain more goods without violating any constraint.
To solve this issue, we use a simple scaling technique to partition the agents into two levels according to their reported profile and let the agents in the lower level buy the item at the market clearing price while the agents in the higher level have to pay a slightly higher price. The agent who determines the market clearing price always stays in the lower level, and she can obtain more goods only if she jumps into the higher level by increasing the reported v_i/τ_i. However, in that case, the agent needs to pay a higher price that violates her RoS constraint. Thus, the agent has no incentive to misreport a lower ratio. The detailed mechanism is stated in <ref>. This subsection aims to show the following theorem. For any parameter ϵ>0, <ref> is truthful and achieves an approximation ratio of 1/(1+ϵ)(2+ϵ), which tends to 1/2 when ϵ approaches 0. We first show that the allocation satisfies the budget constraint and the reported RoS constraint of each agent, then discuss the truthfulness, and finally give the analysis of the approximation ratio. Given any , for each agent i, we have p_i≤ B_i and τ_i p_i ≤ x_i v_i. We discuss case by case. If B[k]>w_k+1, for an agent i≤ k, we have p_i = B_i · C[k]/(1+ϵ)B[k] < B_i, and p_i/x_i = C[k] ≤ w_k≤ w_i≤v_i/τ_i. The first inequality in the second formula used the fact that k is an index with B[k] ≤ w_k and w_k is an exponential multiple of 1+ϵ. For all other agents, obviously, the two constraints are satisfied since their payments are 0. Consider the case that B[k]≤ w_k+1. For an agent i≤ k, clearly, the budget constraint is satisfied. If w_i > w_k+1, since each w_i is an exponential multiple of 1+ϵ, we have w_i ≥ (1+ϵ)w_k+1, and p_i/x_i = (1+ϵ)w_k+1≤ w_i≤v_i/τ_i. Otherwise, we have w_i = w_k+1 and p_i/x_i = w_k+1 = w_i≤v_i/τ_i. For agent k+1, the budget constraint holds because for index k+1, B[k+1]>w_k+1 (otherwise, k is not the maximum index with B[k]≤ w_k). More specifically, p_k+1 = w_k+1-B[k]/1+ϵ = B_k+1+w_k+1-B[k+1]/1+ϵ < B_k+1. The RoS constraint is also easy to show: p_i/x_i≤ w_k+1≤v_i/τ_i. Finally, for all other agents, the two constraints are satisfied since their payments are 0. Then we prove the truthfulness. Notice that changing the reported profile may change the indices of the agents in step <ref>. To avoid confusion, we use agent a to represent a certain agent. We first show that any agent a will not misreport a lower v_a/τ_a because when v_a/τ_a becomes smaller, x_a cannot increase (<ref>); and then build on the RoS constraints to prove the other hand (<ref>). For any agent a, x_a is non-increasing as v_a/τ_a decreases. Given a reported profile (,), refer to = max{B[k],w_k+1} as the market clearing price. Decreasing v_a/τ_a unilaterally may change the value of k, the top-k agents S, the index π(a) of agent a, and the market clearing price . Use k', S', π'(a) and ' to denote the three terms respectively after decreasing v_a/τ_a to v_a'/τ_a'. Clearly, if the current index π(a) is already larger than k, x_a is either 0 or 1/1+ϵ-B[k]/(1+ϵ)w_k+1, and will not increase as v_a/τ_a decreases. Thus, we only need to consider the case that π(a) is at most k, i.e., x_a= B_a/(1+ϵ). Due to the observation that min_i∈ S∖{a} w_i ≥min_i∈ S w_i ≥∑_i∈ S B_i > ∑_i∈ S∖{a} B_i, we have k'≥ k-1 and S∖{a}⊆ S' after decreasing v_a/τ_a. If k'=k-1, w.l.o.g., we can assume that the new index π'(a) is k'+1 and the new market clearing price ' is w'_a; otherwise, agent a obtains nothing. Let agent b be the (k+1)-th player when the reported profile is (,). 
Since π'(a)=k'+1=k, agent a still ranks higher than agent b, i.e., w'_a≥ w_b. Then according to the definition of k', we see that the market clearing price decreases: = ∑_i∈ S B_i > w'_a = '. Thus, x_a' = 1/1+ϵ - ∑_i∈ S∖{a}B_i/(1+ϵ)' < 1/1+ϵ - ∑_i∈ S∖{a}B_i/(1+ϵ) = ∑_i∈ S B_i - ∑_i∈ S∖{a}B_i /(1+ϵ) = x_a. For the case that k' ≥ k, we claim that either agent a or agent b is contained in S'. Suppose that b∉ S'. Since only agent a changes the reported profile, it is easy to verify that k'=k and S'=S, implying that '==max{∑_i∈ SB_i, w_b} and x_a'=x_a. If b ∈ S', due to the fact that ∑_i∈ S∖{a} B_i + B_a +B_b > w_b (the definition of k), agent a can not belong to S'. Without loss of generality, assume that π'(a)=k'+1 and '=w'_a; otherwise, x'_a=0. We also see that the market clearing price is non-increasing: ≥ w_b ≥'. Thus, x'_a = 1/1+ϵ-∑_i∈ S'B_i/(1+ϵ)'≤1/1+ϵ-∑_i∈ S∖{a} B_i + B_b/(1+ϵ) = -∑_i∈ S∖{a} B_i - B_b/(1+ϵ). Regardless of whether takes the value w_b or ∑_i∈ SB_i, we always have - ∑_i∈ S∖{a} B_i - B_b < B_a, which implies that x'_a<x_a and completes the proof. Consider any agent a and any v_a'/τ_a'> v_a/τ_a. If x'_a >x_a, then v_a x'_a < τ_a p'_a. Use to denote the market clearing price when the reported profile is (,). Clearly, if w_a >, we have x'_a = x_a for any v_a'/τ_a'> v_a/τ_a. In other words, x'_a>x_a happens only when w_a ≤. We distinguish two cases. First, if w_a <, the current price of the item (for agent a) is at least (1+ϵ)w_a. Noticing that increasing v_a/τ_a cannot decrease the price, we have p_a/x_a≥ (1+ϵ)w_a > v_a/τ_a. For the case that w_a=. Since <ref> breaks the ties in a fixed manner, x_a increases only when agent a jumps to the higher level, i.e., w_a' >. Thus, according to the payment rule, we still have p_a/x_a≥ (1+ϵ)w_a > v_a/τ_a. Combining <ref> and <ref> proves the truthfulness of the mechanism. Finally, we analyze the approximation ratio of the mechanism. <ref> is 1/(1+ϵ)(2+ϵ)-approximation. The proof is technically simple and similar to the analysis in <cit.>. Use and to denote the optimal payment and our payment respectively. We first give an upper bound of and then establish the relationship between the upper bound and . For the top-k agents, due to the budget constraints, the optimal mechanism charges them at most B[k]; while for all the remaining agents, due to the RoS constraints, the optimal mechanism charges them at most max_i>kv_i/τ_i≤ (1+ϵ)w_k+1. Namely, ≤ B[k] + (1+ϵ)w_k+1. Then we analyze . If B[k] > w_k+1, our total payment is = ∑_i∈[k] p_i = ∑_i∈ [k]B_i· C[k]/(1+ϵ)B[k]≥B[k]/(1+ϵ) > w_k+1/(1+ϵ); while if B[k]≤ w_k+1, the total payment is = ∑_i∈[k] p_i + p_k+1≥B[k]/1+ϵ + w_k+1/1+ϵ - B[k]/1+ϵ = w_k+1/1+ϵ≥B[k]/1+ϵ. Thus, in either case, we have (1+ϵ) + (1+ϵ)^2 > > /(1+ϵ)(2+ϵ). §.§ Multiple Items Auction for Additive Agents In this subsection, we build on the aforementioned single-item auction to give a truthful mechanism for multiple-items auction. The mechanism is described in <ref>. One critical part of the mechanism is that it splits the budget of each agent and runs <ref> for each item to get solution (,()). We observe that although each single item auction is truthful individually, outputting (,()) directly gives an untruthful mechanism. 
An agent may misreport a lower target ratio to obtain more value because even if for some item j, the RoS constraint is violated (i.e., ∃ j∈ [m], v_ijz_ij/ p_i(_j) < τ_i), it is possible that the overall RoS constraint still holds when summing over all items because the return-on-spend ratio v_ijz_ij/ p_i(_j) of each bought item j is different. A natural idea to handle this issue is raising the purchase prices of some items for an agent to guarantee that the agent's return-on-spend ratio of each bought item equals min_j:p_i(_j)>0v_ijz_ij/ p_i(_j) so that once the agent violates the RoS constraint on some item, the overall RoS constraint must also be violated. Following this line, since the purchase prices are raised, to maintain the budget constraints, we need to reduce the number of items assigned to each agent. Thus, in <ref>, we introduce T_i(j) and let agent i buy at most z_ij fraction of any item j'∈ T_i(j). Finally, to maximize the total revenue, the mechanism charges each agent her maximum willingness-to-pay. We state the main theorem in the following. <ref> is feasible, truthful, and obtains an approximation ratio of (1/√(n)) when the budget profile and the value profile are public. §.§.§ Feasibility and Truthfulness We start by proving the feasibility and the truthfulness of the mechanism. For each item j∈ [m], <ref> satisfies the unit item supply constraint: ∑_i∈ [n] x_ij≤ 1. For each agent i∈ [n], the mechanism satisfies the budget constraint and the RoS constraint: p_i≤ B_i and τ_ip_i ≤∑_j∈[m]x_ijv_ij. For each item j, since _j is the assignments returned by running <ref> and applying an item supply clipping, we have ∑_i∈ [n]z_ij≤ 1. According to the definition of T_i(h(i)), for any item j∈ T_i(h(i)), z_ij≥ z_i,h(i) and thus, x_ij = z_i,h(i)≤ z_ij, proving that the unit item supply constraints are satisfied. For each agent i, the mechanism charges her min{ B_i,U_i(h(i))/τ_i}. According to the definition of U_i(h(i)), we see that this is exactly the total value of the obtained items. Hence, the mechanism satisfies the budget constraint and the RoS constraint. Similar to the last subsection, we use two lemmas to prove the truthfulness. For any agent i, ∑_j∈ [m]v_ijx_ij is non-increasing as τ_i increases. For each agent-item pair (i,j), according to <ref>, z_ij is non-increasing as τ_i increases, which implies that U_i(j) is also non-increasing. Since h(i) is the item that obtains the maximum value of U_i(j), U_i(h(i)) is non-increasing. As mentioned above, U_i(h(i)) is exactly the total obtained value. Thus, we have ∑_j∈ [m]v_ijx_ij = U_i(h(i)) is non-increasing as τ_i increases. Consider any agent i and any τ'_i < τ_i. If ∑_j∈ [m]v_ijx'_ij>∑_j∈ [m]v_ijx_ij, then ∑_j∈ [m]v_ijx'_ij< τ_i p'_i. Consider an agent i and any τ'_i < τ_i, if ∑_j∈ [m]v_ijx'_ij>∑_j∈ [m]v_ijx_ij, there must exist at least one item l∈ T'_i(h'(i)) such that z'_il> z_il≥ 0; otherwise, the agent cannot obtain more valuable items. According to <ref>, we have p_i'(_l')/z'_il > v_il/τ_i. Consider the following payment rule: for each item j, we charge the agent q'_ij = x'_ij·p_i'(_l')/z'_il·v_ij/v_il. Clearly, this payment rule violates the RoS constraint for any item j: q'_ij/x'_ij = p_i'(_l')/z'_il·v_ij/v_il > v_il/τ_i·v_ij/v_il = v_ij/τ_i, and thus, ∑_j∈ [m] q'_ij > ∑_j∈[m] v_ijx'_ij/τ_i. Finally, we show that p'_i = min{ B_i,U'_i(h'(i))/τ'_i}≥∑_j∈ [m] q'_ij. According to <ref>, the single item auction mechanism satisfies p_i'(_l')≤ B_il and p_i'(_l') ≤v_ilz_il'/τ_i'. 
Thus, for each item j∈ T'_i(h'(i)), due to x_ij' ≤ z'_il and B_il/v_il = B_ij/v_ij, we have q'_ij = x'_ij·p_i'(_l')/z'_il·v_ij/v_il≤ B_ij, and q'_ij = x'_ij·p_i'(_l')/z'_il·v_ij/v_il≤ x_ij'·v_ij/τ_i'. Summing over all the items, ∑_j∈ [m]q_ij' ≤min{ B_i,∑_j∈ [m]x_ij'v_ij/τ_i'} = p_i' , completing the proof. <ref> prevents an agent from misreporting a target ratio higher than the actual ratio since the agent is a value maximizer, while <ref> guarantees that the agent cannot misreport a ratio lower than the actual ratio because otherwise, her RoS constraint will be violated. Thus, combing these two lemmas proves the truthfulness[We can also claim that <ref> and <ref> immediately prove the truthfulness according to <cit.>]. §.§.§ Approximation Ratio This subsection analyzes the approximation ratio of <ref>. As mentioned above, at the beginning stage of the mechanism, we split the budget of each agent based on the value profile. To streamline the analysis, we consider the setting where each agent i can only use the sub-budget B_ij to buy some fractions of each item j. Use to denote the optimal objective of this sub-budget constrained setting. According to the approximation ratio of <ref> (<ref>) and the item supply clipping bar 1/2, we have ∑_i∈ [n],j∈[m] z_ij· p_i(_j) ≥1/2(1+ϵ)(2+ϵ)· for any ϵ>0. This inequality splits our proof into two parts. We first show that is at least 1/2√(n)+3·, and then establish the relationship between our objective value and ∑_i∈ [n],j∈[m] z_ij p_i(_j). ≥1/2√(n)+3· Instead of comparing and directly, we introduce a simple greedy algorithm for the sub-budget constrained setting in <ref> and show that the objective obtained by the algorithm is at least 1/2√(n)+3·. Use (,) and (^*,^* ) to represent the solution of <ref> and the optimal solution (of the original setting) respectively. We partition all the agents into two groups: S:={i∈ [n] | p_i ≥ B_i/ √(n)} and R:={i∈ [n] | p_i < B_i/ √(n)}, and get an upper bound of : = ∑_i∈[n],j∈[m] x_ij^*w_ij≤∑_i∈ S B_i + ∑_j∈ [m]∑_i∈ R x_ij^*w_ij≤√(n)· + ∑_j∈ [m]∑_i∈ R x_ij^*w_ij . The remaining part is to prove that ∑_j∈ [m]∑_i∈ R x_ij^*w_ij can also be bounded by O(√(n)) ·. For each item j, define a(j) := _i∈ R w_ij to be the agent i∈ R with the maximum w_ij. Clearly, ∑_j∈ [m]∑_i∈ R x_ij^*w_ij≤∑_j∈ [m] w_a(j),j . We further partition all the items into two groups based on their assignments in the greedy solution: P:= {j∈ [m] | x_a(j),jw_a(j),j < B_a(j),j} and Q:={j∈ [m] | x_a(j),jw_a(j),j = B_a(j),j}. For each item j∈ P, if sorting all agents in the decreasing order of {w_ij}, agent a(j) is either the last agent who buys item j in <ref>, or ranks behind the last agent buying item j; otherwise, agent a(j) must exhaust the sub-budget B_a(j),j. Thus, w_a(j),j≤_j(), and therefore, ∑_j∈ P w_a(j),j≤∑_j∈ P_j() ≤ . For the items in Q, we reorganize the corresponding formula: ∑_j∈ Q w_a(j),j = ∑_i∈ R ∑_j∈ Q : a(j)=i w_ij. For simplicity, use Q(i) to denote the item subset {j∈ Q | a(j)=i }. We aim to show that ∀ i∈ R, ∑_j∈ Q(i) w_ij is at most / (√(n)-1), and thus, their sum can be bounded by O(√(n)) ·. For each agent i∈ R, due to the similar argument in the last paragraph, we have ∑_j∉ Q(i) w_ij≤∑_j∉ Q(i)_j() ≤ . Recall that any agent i ∈ R pays less than B_i/√(n). It is easy to observe that for an agent i∈ R, the sum budget of the items in Q(i) is very limited because the agent spends very little compared to the budget even though she has exhausted the sub-budgets of these items. 
More formally, we have ∑_j∈ Q(i) B_ij < B_i/√(n) ∑_j∈ Q(i) B_i ·v_ij/∑_j'∈ [m]v_ij' ≤B_i/√(n) ∑_j∈ Q(i) w_ij/∑_j∈ Q(i) w_ij + ∑_j∉ Q(i) w_ij ≤1/√(n) ∑_j∈ Q(i) w_ij ≤1/√(n)-1∑_j∉ Q(i) w_ij . Combing <ref>, <ref> and <ref> and then summing over all agents in R, we have ∑_j∈ Q w_a(j),j = ∑_i∈ R ∑_j∈ Q(i) w_ij≤n/√(n)-1· . Finally, combing <ref>, <ref>, <ref> and <ref> completes the proof: ≤( √(n) + 1 + n/√(n)-1) ·≤ (2√(n) + 3) . For any ϵ>0, ≥min{1/2,1/1+ϵ}·∑_i∈ [n],j∈[m] z_ij p_i(_j) We prove the lemma by showing that for any agent i, p_i≥min{1/2,1/1+ϵ}·∑_j∈[m] z_ij p_i(_j) . Consider an arbitrary agent i. Use g(i) to denote the item j with the minimum non-zero z_ij, i.e., g(i):=_j: z_ij>0 z_ij. We construct an auxiliary allocation {y_ij}_j∈ [m] and payment q_i as follows: * For each item j, set y_ij=z_i,g(i) if j∈ T_i(g(i)) and 0 otherwise. * Find the most cost-effective available item l:=_j∈ T_i(g(i)) p_i(_j) /z_ijv_ij and set q_i = ∑_j∈ [m] y_ij· p_i(_l) /z_ilv_il· v_ij . Similar with the last part analysis in the proof of <ref>, we see that payment q_i is at most min{B_i,U_i(g(i))/τ_i}, and therefore, q_i ≤min{B_i,U_i(g(i))/τ_i}≤min{B_i, U_i(h(i))/τ_i} = p_i, where the second inequality used the fact that h(i):= _j∈ [m] U_i(j). Now we show that q_i is at least a constant fraction of ∑_j∈[m] z_ij p_i(_j). Noting that g(i) is the item with the minimum non-zero z-value, ∑_j∈[m] z_ij· p_i(_j) = ∑_j∈ T_i(g(i)) z_ij· p_i(_j). We distinguish two cases based on the value of z_i,g(i): (1) z_i,g(i)≥ 1/2, (2) z_i,g(i) < 1/2. If z_i,g(i)≥ 1/2, we have y_ij· p_i(_l) /z_ilv_il· v_ij≥1/2· p_i(_j) /z_ijv_ij· v_ij≥1/2· z_ij· p_i(_j) for any item j∈ T_i(g(i)). For the second case, due to the item supply clipping in <ref>, agent i must be one of the top-k agents in the single-item auction that sells item g(i). Thus, according to <ref>, we have p_i(_g(i)) ≥B_i,g(i)/1+ϵ. Thus, for any item j∈ T_i(g(i)), y_ij· p_i(_l) /z_ilv_il· v_ij ≥ z_i,g(i)· p_i(_g(i)) /z_i,g(i)v_i,g(i)· v_ij ≥B_i,g(i)/1+ϵ·v_ij/v_i,g(i) = B_ij/1+ϵ ≥1/1+ϵ· z_ij· p_i(_j). Thus, in either case, we have p_i ≥ q_i =∑_j∈ [m] y_ij· p_i(_l) /z_ilv_il· v_ij≥min{1/2,1/1+ϵ}·∑_j∈ [m] z_ij· p_i(_j), which completes the proof. Combining <ref>, <ref> and <ref> proves an approximation ratio of (1/√(n) ). § OMITTED DETAILS IN SECTION <REF> In this section, we restate the random sampling mechanism proposed in <cit.> and the results they obtained. The mechanism is described in <ref>. The random sampling mechanism is a universal truthful budget feasible mechanism which guarantees a constant fraction of the liquid welfare under the large market assumption. The correctness of the above theorem heavily depends on <cit.>, which states that the liquid welfare obtained from the random sampling algorithm is at least some constant fraction of the optimal mechanism. To prove <cit.>, they use the revenue obtained by a truthful auction as a lower bound of the liquid welfare. Thus, <cit.> actually holds for the revenue maximization objective. Hence, we have the following corollary. The random sampling mechanism is a budget feasible and truthful mechanism which achieves a constant approximation under the large market assumption. Suppose that there is a random sampling mechanism for the liquid welfare maximizing model, whose input is the budget profile ={B_i}_i∈ [n] and the value profile = {w_ij}_i∈ [n],j∈ [m]. Our mechanism ' is constructed as follows. Given an input profile (,,), define = {w_ij=v_ij/τ_i}_i∈ [n], j∈ [m]. 
Then, run mechanism on the input (,) to get the allocation . Finally, we charge each agent i p_i=min{B_i, ∑_jv_ijx_ij/τ_i}. Essentially, mechanism ' is constructed by simply changing the payment of each agent i in to her maximum willingness-to-pay min{B_i, ∑_jv_ijx_ij/τ_i}. According to the arguments in <ref>, for the random sampling mechanism, the truthfulness can always be guaranteed. For the approximation ratio, we observe that the new payment rule does not violate the budget constraints and the ROS constraints, and guarantees that the total payment in ' equals the liquid welfare obtained by . Since the constructed liquid welfare instance and our instance share the same optimal objective value, <ref> can be proved directly by the following theorem. There exists a random sampling mechanism which is a universal truthful budget feasible mechanism and guarantees a constant fraction of the liquid welfare under the large market assumption.
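A minimal sketch of this black-box reduction is given below (our own code). The argument liquid_welfare_mechanism is a placeholder for the cited random sampling mechanism, which we do not reimplement here; the dummy mechanism in the usage lines exists only to exercise the wrapper.

def reduce_to_liquid_welfare(B, v, tau, liquid_welfare_mechanism):
    # Wrap a liquid-welfare mechanism M into a mechanism M' for value maximizers:
    # feed it w_ij = v_ij / tau_i, keep its allocation, and charge each agent her
    # maximum willingness-to-pay min{B_i, sum_j v_ij * x_ij / tau_i}.
    n, m = len(B), len(v[0])
    w = [[v[i][j] / tau[i] for j in range(m)] for i in range(n)]
    x = liquid_welfare_mechanism(B, w)     # allocation: x[i][j] in [0, 1]
    payments = [min(B[i], sum(v[i][j] * x[i][j] for j in range(m)) / tau[i])
                for i in range(n)]
    return x, payments

# Dummy stand-in for the cited mechanism, purely to exercise the wrapper.
dummy = lambda B, w: [[1.0 if i == 0 else 0.0 for _ in w[0]] for i in range(len(B))]
print(reduce_to_liquid_welfare([5.0, 2.0], [[4.0, 6.0], [3.0, 3.0]], [2.0, 1.0], dummy))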
http://arxiv.org/abs/2307.03946v1
20230708100056
Superconducting Gap Structure of Filled Skutterudite LaOs$_4$As$_{12}$ Compound through $μ$SR Investigations
[ "A. Bhattacharyya", "D. T. Adroja", "A. D. Hillier", "P. K. Biswas" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mtrl-sci", "cond-mat.str-el" ]
Department of Physics, Ramakrishna Mission Vivekananda Educational and Research Institute, Belur Math, Howrah 711202, West Bengal, India ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom Highly Correlated Matter Research Group, Physics Department, University of Johannesburg, PO Box 524, Auckland Park 2006, South Africa ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom Deceased ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom Filled skutterudite compounds have gained attention recently as innovative platforms for studying intriguing low-temperature superconducting properties. Regarding the symmetry of the superconducting gap, contradictory findings have been reported from several experiments for LaRu_4As_12 and its isoelectronic counterpart, LaOs_4As_12. In this vein, we report comprehensive bulk and microscopic results on LaOs_4As_12 utilizing specific heat analysis and muon-spin rotation/relaxation (μSR) measurements. Bulk superconductivity with T_C = 3.2 K was confirmed by heat capacity measurements. The superconducting ground state of the filled-skutterudite LaOs_4As_12 compound is found to have two key characteristics: the superfluid density exhibits saturation-type behavior at low temperature, which points to fully gapped superconductivity with a gap value of 2Δ/k_BT_C = 3.26; additionally, the superconducting state does not show any sign of spontaneous magnetic fields, supporting the preservation of time-reversal symmetry. These results open the door for the development of La-based skutterudites as special probes for examining the interplay of single- and multiband superconductivity in classical electron–phonon systems. Superconducting Gap Structure of Filled Skutterudite LaOs_4As_12 Compound through μSR Investigations P. K. Biswas August 12, 2023 ==================================================================================================== § INTRODUCTION Due to their potential as thermoelectric materials for either refrigeration or power generation applications, many filled skutterudite compounds with RT_4X_12 stoichiometry (R = alkali metals, alkaline earth metals, lanthanides, or light actinides; T = Fe, Os, Ru; X = P, As, Sb) have lately been the focus of several investigations <cit.>. With two formula units RT_4X_12 per unit cell, these compounds form a body-centered cubic structure (space group Im3̅, No: 204). The structures consist of rigid covalently bonded cage-forming frameworks T_4X_12 that encapsulate various bonded guest atoms R. This leads to local anharmonic thermal vibrations (rattling modes), which reduce phononic heat conduction and open the door to their potential as promising thermoelectric materials. Because of the significant hybridization between the 4f band manifold and electronic conduction states, as well as the degree of freedom provided by the R-f-derived multipole momenta of the cubically symmetric X_12 cages, these compounds may exhibit a variety of distinct electronic and magnetic ground states. Examples include unconventional superconductivity <cit.>, the Kondo effect <cit.>, heavy fermions <cit.>, non-Fermi liquid behavior <cit.>, etc.
The majority of the Pr- and Ce-based filled skutterudite compounds are hybridized gap semiconductors or show magnetic transitions; however, PrOs_4Sb_12 <cit.>, PrRu_4Sb_12 <cit.> and PrRu_4As_12 <cit.> show superconducting transitions at 1.8 K, 0.97 K and 2.4 K, respectively. PrOs_4Sb_12 is highly intriguing for a variety of reasons <cit.>, including: (i) it is the first known example of a heavy-fermion superconductor containing Pr; (ii) it shows unconventional strong-coupling superconductivity that breaks time-reversal symmetry; and (iii) instead of magnetic fluctuations, electric quadrupole fluctuations may be involved in the superconducting pairing process. The unique band structure of these compounds and the hybridization effects between localized f electrons and conduction electrons appear to play a crucial role, in addition to the fact that the origin of the majority of those unconventional phenomenologies is unknown. It was recently revealed that the Fermi level of La compounds lies at a prominent peak arising from the T-d band manifold, which might contribute to electronic instability <cit.>. Several La-based compounds LaT_4X_12 are especially notable within the filled skutterudite class due to their remarkable superconducting properties. Examples include LaFe_4P_12 (T_C = 4.1 K) <cit.>, LaOs_4P_12 (T_C = 1.8 K) <cit.>, and LaRu_4Sb_12 (T_C = 3.6 K) <cit.>, with special attention to LaRu_4As_12 (T_C = 10.3 K, H_c2 = 10.2 T), which has the highest superconducting transition temperature <cit.>. The ratio of the heat capacity jump ΔC to γT_C is ΔC/(γT_C) = 1.75 for LaRu_4As_12, in comparison to the BCS value of 1.43 <cit.>. While the majority of La-based filled skutterudites are completely gapped superconductors, past research has shown numerous unique aspects of LaRu_4As_12, such as a positive curvature of H_c2, nonexponential behavior of the electronic heat capacity, and a square-root field dependence of the Sommerfeld coefficient (γ) <cit.>. We recently reported unambiguous evidence of multiband s+s-wave superconductivity in LaRu_4As_12 using muon-spin rotation measurements, with 2Δ_1/k_BT_C = 3.73 for the larger gap and 2Δ_2/k_BT_C = 0.144 for the smaller gap <cit.>. Furthermore, inelastic X-ray scattering experiments indicated essentially temperature-independent phonon modes between 300 K and 20 K, while at 2 K a weak softening of specific phonon modes is detected <cit.>. All of these results demonstrate the relevance of the electron–phonon interaction in the superconductivity of LaRu_4As_12, and they accord well with the DFT-based phonon simulations <cit.>. Another isostructural La-based filled skutterudite compound, LaOs_4As_12, has been reported by Shirotani et al. to exhibit superconductivity with T_C = 3.2 K <cit.>. LaOs_4As_12 has also shown some signs of multiband superconductivity, such as an upward curvature of the upper critical field near the transition temperature and unusual behavior in the electronic specific heat data <cit.>. A single-gap, s-wave superconducting ground state, however, is suggested by a recent study of the temperature dependence of the lower critical field <cit.>. Another study found that the high-amplitude lanthanum phonons dominate the vibrational eigenmodes at low energies, based on the phonon dispersion relation determined from inelastic neutron scattering experiments <cit.>. 
We have thus performed systematic muon-spin rotation and relaxation (μSR) measurements to examine the superconducting pairing process in the LaOs_4As_12 compound. Contrary to prior experimental work asserting two-band superconductivity <cit.>, we demonstrate that the low-temperature behavior of the superfluid density points to a fully gapped superconducting Fermi surface. Furthermore, the preservation of time-reversal symmetry is confirmed by the lack of spontaneous magnetic fields in the superconducting state, ruling out unusual pairing processes. The change from two-band superconductivity in LaRu_4As_12 to single-band superconductivity in LaOs_4As_12 is caused by differences in interband coupling strength, as evidenced by the different degrees of hybridization and electronic properties observed in the Fermi surfaces of both compounds <cit.>. These results underline the significance of the LaRu_4As_12 and LaOs_4As_12 compounds as an important platform for investigating the competition between single-band and multiband superconductivity in electron–phonon driven filled skutterudites. § EXPERIMENTAL DETAILS The high-temperature molten-metal-flux technique, described in <cit.>, was used to grow single crystals of LaOs_4As_12. In a quartz ampule, elements with purities higher than 99.9% and a molar ratio of La:Os:Cd:As = 1:4:12:48 were combined. The details on the single crystal growth can be found in <cit.>. The relaxation approach was used to measure the heat capacity in a Quantum Design physical property measurement system (PPMS). Temperatures as low as 0.38 K were attained utilizing a He-3 attachment to the PPMS <cit.>. The μSR measurements were carried out on small, unaligned single crystals of LaOs_4As_12 (0.1 mm × 0.1 mm × 0.1 mm, total mass 1 g), which gave a powder-averaged muon signal. The MuSR spectrometer at the Rutherford Appleton Laboratory, ISIS Neutron and Muon Source in the UK was used to perform the μSR measurements <cit.>. In a μSR experiment, the sample is injected with 100% spin-polarized muons. Each implanted muon thermalizes, at which point it decays (lifetime τ_μ = 2.2 μs) into a positron (and two neutrinos) which is preferentially released in the direction of the muon spin at the moment of decay. Utilizing detectors carefully placed around the sample, the decay positrons are detected and time-stamped. It is possible to calculate the asymmetry in the positron emission as a function of time, A(t), using the collected histograms from the forward (F) and backward (B) detectors, A(t) = [N_F(t) - α N_B(t)]/[N_F(t) + α N_B(t)], where α is a calibration factor for the instrument and N_F(t) and N_B(t) are the number of positrons counted in the forward and backward detectors, respectively. Detectors are placed longitudinally during ZF-μSR, and a correction coil is used to cancel out any stray magnetic fields to within 10^-4 mT. To investigate time-reversal symmetry, ZF-μSR measurements were carried out <cit.>. In the vortex state, TF-μSR measurements were performed with applied fields of 20, 30, 40, 50, and 60 mT, which are greater than the lower critical field H_c1 (∼5 mT) and lower than the upper critical field H_c2 (∼1 T) <cit.>. The sample was covered using a thin silver foil after being mounted onto a high-purity (99.995%) silver sample holder using diluted GE varnish. The sample was cooled down to 300 mK using a dilution refrigerator. 
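For illustration only (not part of the original analysis), the asymmetry defined above can be expressed as a one-line helper in R; the function and argument names are our own.

# Illustrative helper for the asymmetry A(t) defined above (our own naming):
# N_F and N_B are the forward/backward detector histograms, alpha the calibration factor.
asymmetry <- function(N_F, N_B, alpha = 1) (N_F - alpha * N_B) / (N_F + alpha * N_B)
asymmetry(N_F = 1000, N_B = 1000)   # equal counts in both detectors give zero asymmetry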
To generate the vortex lattice by trapping the applied TF, we applied the field above T_C and then cooled the sample in the field to the base temperature of 300 mK. We used the WiMDA software <cit.> to analyze the μSR data. § RESULTS AND DISCUSSION §.§ Crystal Structure & Physical Properties LaOs_4As_12 crystallizes in a CoAs_3-type skutterudite structure packed with La atoms and has a body-centered cubic structure with the space group Im3̅ (No. 204), as shown in Figure <ref>. The large icosahedral cage made of As atoms is located around the electropositive La sites, which lack four-fold rotational symmetry. Between the cages, the transition metal ions (Os) form a cubic sublattice. The low-temperature specific heat C_P measured as a function of temperature at zero magnetic field is shown in the inset of Figure <ref>a. Using the equation C_P = γT + βT^3, the normal-state heat capacity was fitted. From this, we obtained the lattice contribution to the specific heat, β = 0.613 mJ/mol K^4, and the electronic parameter (Sommerfeld coefficient), γ = 90.47 mJ/mol K^2. The Debye temperature is determined using the Debye model as Θ_D = (12π^4 n R/(5β))^1/3, where R = 8.314 J/(mol K) is the universal gas constant and n denotes the number of atoms in the formula unit (n = 17). The value of Θ_D is thus calculated to be approximately 377 K, which agrees with the previous measurement <cit.>. Figure <ref>a displays the low-T electronic specific heat C_e obtained after subtracting the phonon contribution. The heat capacity jump at T_C (ΔC_e/γT_C) is calculated to be 1.2, which is less than the value of 1.43 expected for weak-coupling BCS superconductivity. The fit to the exponential temperature dependence of C_e(T) yields Δ(0) = 0.40 meV, which is close to the 0.45 meV value obtained from the TF-μSR data analysis (discussed in Section B). Thus, the value of 2Δ(0)/k_BT_C = 2.9, which is less than the 3.53 anticipated for weak-coupling BCS superconductors. However, the linear fitting shown in Figure <ref>b shows that this material exhibits BCS behavior with a single isotropic gap. §.§ Superconducting Gap Structure: TF-μSR The pairing mechanism and superconducting gap structure of LaOs_4As_12 were investigated by TF-μSR experiments down to 0.3 K. The TF-μSR asymmetry time spectra in the presence of 20 mT and 50 mT applied magnetic fields, above and below T_C, are shown in Figures <ref>a–d. Because of the extra inhomogeneous field distribution of the vortex lattice generated inside the superconducting mixed state of LaOs_4As_12, the spectra in Figure <ref>a,c in the superconducting state at 0.3 K demonstrate a greater relaxation. The asymmetry spectra were fitted <cit.> using the following Gaussian damped decay function, A_TF(t) = A_sc exp(-σ_TF^2 t^2/2) cos(γ_μ B_sc t + ϕ) + A_bg cos(γ_μ B_bg t + ϕ). The muon gyromagnetic ratio is γ_μ/2π = 135.53 MHz/T, and the initial asymmetries of muons stopping on the sample and on the silver holder are A_sc and A_bg, respectively (constant across the entire temperature range). The local fields B_sc and B_bg represent muons stopping on the sample and on the sample holder, respectively, whereas ϕ is the initial phase and σ_TF is the Gaussian depolarization rate. We calculated the values of A_sc = 76% and A_bg = 24% of the total asymmetry by fitting the 0.3 K data. When the data at other temperatures were analyzed, A_bg was kept constant and A_sc was found to be nearly temperature independent. 
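As a quick arithmetic check of the numbers quoted above (a sketch, not part of the original analysis; the variable names are ours and the input values are taken from the text), the Debye temperature and the gap-to-T_C ratio can be reproduced in a few lines of R.

# Consistency check of the Debye temperature and the specific-heat gap ratio quoted above.
R_gas  <- 8.314      # universal gas constant, J/(mol K)
n_atom <- 17         # atoms per formula unit of LaOs4As12
beta   <- 0.613e-3   # lattice specific-heat coefficient, J/(mol K^4)
(theta_D <- (12 * pi^4 * n_atom * R_gas / (5 * beta))^(1/3))   # ~377 K, as quoted

kB_meV <- 0.08617    # Boltzmann constant, meV/K
Tc     <- 3.2        # K
Delta0 <- 0.40       # meV, from the exponential fit to C_e(T)
2 * Delta0 / (kB_meV * Tc)   # ~2.9, below the BCS weak-coupling value of 3.53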
The emergence of bulk superconductivity is indicated by an increase in the σ_TF rate as the system approaches the superconducting state. The superconducting contribution to the relaxation, σ_sc, was determined using the formula σ_sc = √(σ_TF^2 - σ_nm^2), where σ_nm denotes the nuclear magnetic dipolar contribution, which is derived from high-temperature fits and is temperature independent. Figure <ref>e depicts the temperature dependence of σ_sc in several applied TF fields. Due to the low H_c2 value, as seen in Figure <ref>f, σ_sc depends on the applied field. Brandt demonstrated that the London penetration depth λ_L(T) is linked to σ_sc for a superconductor with H_ext/H_c2 ≤ 0.25 <cit.>: σ_sc [μs^-1] = 4.83 × 10^4 (1 - H_ext/H_c2) × {1 + 1.21[1 - √(H_ext/H_c2)]^3} λ_L^-2 [nm]. This relationship has been used to compute the temperature dependence of λ_L(T). As demonstrated in Figure <ref>f, isothermal cuts perpendicular to the temperature axis of the σ_sc data sets were utilized to estimate the H-dependence of the depolarization rate σ_sc(H). The normalized temperature variation of λ_L^-2(T)/λ_L^-2(0), which is directly proportional to the superfluid density, is shown in Figure <ref>a. The data were fitted using the following equation <cit.>: σ_sc(T)/σ_sc(0) = λ_L^-2(T)/λ_L^-2(0) = 1 + (1/π) ∫_0^2π ∫_Δ(T,ϕ)^∞ (∂f/∂E) E dE dϕ/√(E^2 - Δ(T,ϕ)^2), where f = [1 + exp(E/k_BT)]^-1 is the Fermi function. We take Δ_k(T,ϕ) = Δ(T)g_k(ϕ), where we assume the universal temperature dependence Δ(T) = Δ_0 tanh[1.82{1.018(T_C/T - 1)}^0.51]. The magnitude of the gap at 0 K is Δ_0, and the function g_k denotes the angular dependence of the gap, which is equal to 1 for an isotropic s-wave gap, 1 for each gap of the two-gap s+s-wave model, and cos(2ϕ) for a d-wave gap, where ϕ is the azimuthal angle along the Fermi surface. Figure <ref>a illustrates our comparison of three distinct gap models: a single isotropic s-wave gap, a multigap s+s-wave model, and a nodal d-wave gap. As seen in the figure, the superfluid density saturates at low temperatures, which is a characteristic of the s-wave model with a single gap. An isotropic single-band s-wave model with a gap value of 0.45 meV provides the best representation of the data, with a gap-to-T_C ratio 2Δ(0)/k_BT_C = 3.26, which is less than the BCS weak-coupling limit (3.53). On the other hand, the substantial rise in the χ^2 value renders the d-wave model and the s+s-wave (multigap) model inappropriate for this system. A two-gap s+s-wave model of multiband superconductivity has been shown to be compatible with the temperature dependence of the magnetic penetration depth of LaRu_4As_12. The higher gap-to-T_C ratio computed in the s+s-wave scenario, 2Δ_1(0)/k_BT_C = 3.73, is fairly comparable to the value of 3.53 for a BCS superconductor in the case of LaRu_4As_12 <cit.>. For LaRu_4As_12, specific phonon modes at 2 K exhibit modest softening when compared to 20 K, demonstrating that the electron–phonon interactions causing the superconductivity have an appreciable impact on the vibrational eigenstates <cit.>. Using McMillan's relation, it is also possible to determine the electron–phonon coupling constant (λ_e-ph) <cit.>: λ_e-ph = [1.04 + μ^* ln(Θ_D/1.45T_C)] / [(1 - 0.62μ^*) ln(Θ_D/1.45T_C) - 1.04], where μ^* is the repulsive screened Coulomb parameter, usually assigned as μ^* = 0.13. The calculated value of λ_e-ph is 0.534. The London model is described as λ_L^2 = m^*c^2/(4π n_s e^2). 
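For readers who wish to reproduce the fit and the coupling estimate described above, the following minimal R sketch evaluates the isotropic single-gap s-wave form of the superfluid density and the McMillan relation; this is our own implementation with our own function and variable names, the numerical values are those quoted in the text, and it is not the authors' fitting code.

# Sketch of the isotropic single-gap s-wave model used in the fit above
# (energies in meV, temperatures in K; values taken from the text).
kB     <- 0.08617   # Boltzmann constant, meV/K
Tc     <- 3.2       # superconducting transition temperature, K
Delta0 <- 0.45      # gap value at 0 K from the TF-muSR fit, meV

Delta_T <- function(T) Delta0 * tanh(1.82 * (1.018 * (Tc / T - 1))^0.51)

rho_s <- function(T) {            # normalized superfluid density lambda^-2(T)/lambda^-2(0)
  if (T >= Tc) return(0)
  D    <- Delta_T(T)
  dfdE <- function(E) -1 / (4 * kB * T * cosh(E / (2 * kB * T))^2)  # Fermi-function derivative
  # substitution E = D * cosh(x) removes the square-root singularity at E = D
  integrand <- function(x) dfdE(D * cosh(x)) * D * cosh(x)
  1 + 2 * integrate(integrand, lower = 0, upper = 20)$value
}

rho_s(0.3)    # ~1: saturation at low temperature, as seen in the data
rho_s(2.5)    # suppressed as T approaches Tc

# McMillan estimate of the electron-phonon coupling constant quoted above
mu_star <- 0.13; theta_D <- 377
L <- log(theta_D / (1.45 * Tc))
(1.04 + mu_star * L) / ((1 - 0.62 * mu_star) * L - 1.04)   # ~0.53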
The London model connects the effective mass enhancement m^* [= (1 + λ_e-ph) m_e], the superconducting carrier density n_s [= m^*c^2/(4π e^2 λ_L(0)^2)], and the London penetration depth. By employing the s-wave model, we determined the London penetration depth λ_L(0) = 168 nm. The effective mass enhancement is calculated to be m^* = 1.53 m_e, and the superconducting carrier density is estimated to be n_s = 1.53 × 10^27 carriers m^-3. References <cit.> include a description of the computations in detail. The corresponding calculated values for LaRu_4As_12 are n_s = 8.6 × 10^27 carriers m^-3 and m^* = 1.749 m_e <cit.>. The fitted parameters for LaOs_4As_12 and LaRu_4As_12 (for comparison) are shown in Table <ref>. To explain the observed nature of the superconducting gap structures, it is important to comprehend the electronic structures of these compounds; such calculations have been carried out <cit.>, and the results suggest that the single-band order parameter in LaOs_4As_12 seems to be associated with the hybridized As-p and Os-d electronic character of the Fermi surface. On the other hand, the lack of hybridization for the disjointed Fermi surface of LaRu_4As_12 may explain its multiband superconducting nature. §.§ Preserved Time Reversal Symmetry: ZF-μSR In order to determine if there is a spontaneous magnetic field present in the superconducting ground state, we conducted the ZF-μSR experiment. Figure <ref>b shows the time evolution of the asymmetry spectra for T = 0.3 K < T_C and T = 3.5 K > T_C. The ZF-μSR spectra recorded in the normal and superconducting states show the same relaxation, as can be seen from the overlapping ZF-μSR spectra, indicating that the superconducting state does not show any spontaneous magnetic field or spin fluctuations. This result suggests that time-reversal symmetry is preserved in the LaOs_4As_12 superconducting state. The strong resemblance of the ZF-μSR spectra (above and below T_C) suggests that the time-reversal symmetry is also retained in the superconducting state of LaRu_4As_12. In order to fit the ZF data, a Lorentzian function was used <cit.>, G_ZF(t) = A_sc exp(-λ_ZF t) + A_bg, where λ_ZF is the electronic relaxation rate, A_sc stands for the sample asymmetry, and A_bg for the constant nondecaying background signal. The red line in Figure <ref>b indicates the fits to the ZF-μSR data. The ZF-μSR asymmetry data fitting parameters are λ_ZF = 0.754(4) μs^-1 at 0.3 K and λ_ZF = 0.744(5) μs^-1 at 3.5 K. No conclusive evidence of TRS breaking can be found since the relaxation rate change is within the error bar. § SUMMARY We employed TF-μSR to determine the gap symmetry of the superconducting state of LaOs_4As_12. An isotropic BCS-type s-wave gap model explains the temperature dependence of the superfluid density. The gap-to-T_C ratio, determined from the s-wave gap fit to the superfluid density, is 3.26; this is smaller than the 3.53 expected for conventional BCS systems. The ZF-μSR spectra at 0.3 K and 3.5 K are strikingly similar, indicating that the time-reversal symmetry is intact. These results open up the possibility of using the compounds LaRu_4As_12 and LaOs_4As_12 as special research platforms for investigating, among the filled skutterudites, the interplay between single- and multiband superconducting order parameters in conventional systems. §.§ Acknowledgements We thank T. Cichorek and J. Juraszek for providing the LaOs_4As_12 sample and the ASCII heat capacity data. We would like to thank T. Cichorek, P. P. Ferreira, R. Lucrezi, J. Juraszek, C. Heil and L. T. F. 
Eleno for interesting discussions. AB expresses gratitude to the Science and Engineering Research Board for the CRG Research Grant (CRG/2020/000698 & CRG/2022/008528) and CRS Project Proposal at UGC-DAE CSR (CRS/2021-22/03/549). DTA appreciates the support provided by the Royal Society of London for the Newton Advanced Fellowship between the UK and China, the International Exchange between the UK and Japan, and EPSRC-UK (Grant number EP/W00562X/1). We thank the ISIS Facility for the beam time, RB1520431 <cit.>.
http://arxiv.org/abs/2307.11593v2
20230711134758
Towards a unified language in experimental designs propagated by a software framework
[ "Emi Tanaka" ]
cs.OH
[ "cs.OH", "q-bio.QM", "stat.ME" ]
Towards a unified language in experimental designs propagated by a software framework Emi Tanaka 0000-0002-1455-259X Biological Data Science Institute Australian National University Canberra [email protected] Experiments require human decisions in the design process, which in turn are reformulated and summarized as inputs into a system (computational or otherwise) to generate the experimental design. I leverage this system to promote a language of experimental designs by proposing a novel computational framework, called "the grammar of experimental designs", to specify experimental designs based on an object-oriented programming system that declaratively encapsulates the experimental structure. The framework aims to engage human cognition by building experimental designs with modular functions that modify a targeted singular element of the experimental design object. The syntax and semantics of the framework are built upon consideration from multiple perspectives. While the core framework is language-agnostic, the framework is implemented in the edibble R-package. A range of examples is shown to demonstrate the utility of the framework. Keywords: grammar of experimental designs • design of experiments • comparative experiments • interface design • grammarware § INTRODUCTION Experimental designs offer a rigorous data collection protocol that seeks to achieve pre-defined objectives by imposing purposeful choices and control over experimental variables. The process of deliberation on the final experimental design is just as important, if not more so, to identify any potential issues that can be addressed prior to the execution of the experiment. The experimental design literature, however, is often product-oriented rather than process-oriented; in other words, the focus is on the end product (the validity or efficiency of the planned analysis for the final experimental design; or algorithmic aspects to generate the design) rather than the process to the final design. Similar sentiment dates back decades (as echoed in, for example, David M. Steinberg and Hunter 1984b and its discussions in response) with recognition that deriving the experimental context (e.g. defining aims and selecting experimental factors) and communication are important for experimental planning in the real world. The experimental aim and variables may initially be ill-defined and require iterative refining. In constructing a valid and efficient experimental design, the experimental context is invaluable (see, for example, Bishop, Petersen, and Trayser 1982; Hahn 1984). 
However, this context can be either lost in dialogue or understood implicitly, and consequently, the full context is often not explicitly transcribed. The downstream effect of not explicitly transcribing the context can be large: misunderstanding of the context, loss of knowledge transfer, inappropriate experimental designs rendering the collected data meaningless for confirmatory analysis, or bad analysis that disregards some significant experimental context (e.g. prediction using a variable that was used to derive the response). If anything, investing in a carefully planned experiment will provide more value than an analysis that attempts to scavenge meaning from a botched up experiment. The experimental context, however, is often stripped away or of an afterthought in many experimental design software systems (Tanaka and Amaliah 2022) thereby providing less room for the users to dwell on possible broader concerns in the experimental design. Such software systems may be an artifact of viewing experimentation in terms of abstract mathematical models, which has the benefits of allowing recognition of common ground in distinct experiments (David M. Steinberg and Hunter 1984b), but at the cost of losing the context. No experiment is conducted without a person initiating the experiment. Multiple people with different expertise are typically involved in planning and executing an experiment but human communication is a complex process, let alone interdisciplinary communication that compounds the challenge in achieving a shared understanding (Winowiecki et al. 2011). David M. Steinberg and Hunter (1984a) specifically calls out the statisticians “by working to improve their interpersonal skills and by studying some of the literature by pschologists, anthropologists, and others concerning the interplay between technical and cultural change”. Communication strategies can be employed to form mutual understandings, however, these are not strict requirements for generating an experimental design and (for the better or for the worse) communications are largely left to the autonomy of each individual. This means that the process is subject to large variation that can ultimately affect the final experimental design and critically, the relevance and quality of the experimental data. Coleman and Montgomery (1993) proposed a systematic approach for organizing written documentation of plans for industrial experiments. David M. Steinberg and Hunter (1984b) claimed that continually asking questions about the theory underlying an experiment is important. These practices, and in more general, writing documentation and seeking alternative views, should be a routine practice in experiments (or any data collection activity in fact). However, in the absence of extrinsic motivation, we rely on individual's intrinsic motivation to adopt better practices. Fishbach and Woolley (2022) proposed that the causes of the intrinsic motivation are characterised by the direct association of the activity and goal. In experimental design, our ultimate goal is to collect experimental data that can be used as empirical evidence to satisfy the experimental aim. This goal can be achieved without any of the aforementioned practices. Consequently, better practices of experimental design require the consideration of factors to increase the motivation to adopt those practices. 
The main contribution of this article is a computational framework for constructing an experimental design based on a declarative system that encapsulates experimental structures in a human-centered interface design, with justification of the framework from multiple perspectives. The core framework exposes the intermediate processes that make up the final experimental design, using a cognitive approach that possibly addresses some aforementioned challenges. Section <ref> provides some background and defines terminology to explain the proposed framework described in Section <ref>. Section <ref> demonstrates the utility of the framework using illustrative examples and Section <ref> concludes with a discussion. sec-background § BACKGROUND In this section, I outline some concepts, many of which transcends the field of experimental design that are relevant to the proposed framework presented in Section <ref>. sec-grammar §.§ Grammarware A grammar combines a limited set of words under shared linguistic rules to compose an unlimited number of proper sentences. In information technology, computational objects governed by a set of processing rules constitute a grammar. Klint, Lämmel, and Verhoef (2005) coined the term “grammarware” to refer to grammar and grammar-dependent software from the perspective of engineering. Some examples of grammarware used prominently in statistics are described next. Wilkinson (2005) proposed the concept of “the grammar of graphics” as an object-oriented graphics system that declaratively builds quantitative graphics by specifying relatively modular components (data, statistical transformation, scale, coordinate system, guide and graphical layers that include information about graphical primitives and mapping of data variables to aesthetic attributes), assembling a scene from specifications stored as an object in a tree structure, and then displaying it by translating the assembled object into a graphical device. The most popular interpretation of the grammar of graphics is the ggplot2 package (Wickham 2016) in the R language (R Core Team 2020), but variations exist in other languages as well, such as Gadfly (Jones et al. 2018) in Julia (Bezanson et al. 2017) and plotnine (Kibirige et al. 2022) in Python (Van Rossum and Drake 2009). The realization of the grammar of graphics aids users to flexibly build unlimited graphs from a limited set of “words” (functions). Another grammar is Structured Query Language (SQL), which is a declarative language used to query and manipulate data. SQL statements include special English keywords (e.g. select, inner join, left join, where, and group by) to specify the query in the identified database. SQL statements can include nested queries such that the result of the previous query is piped into the next query. A similar language was employed in the dplyr package (Wickham et al. 2022) in R, referred to as “the grammar of data manipulation” by the authors. The core functions in dplyr require both the first input and output to be objects of the class data.frame (i.e., data in a tabular format), which allows functions to be easily piped in a fashion similar to nested queries in SQL. Each function is designed to perform a single task. The function names correspond to English words, similar to the keywords in SQL. The widespread use of these declarative languages is perhaps a testament to the usefulness of these approaches. For more details and examples, readers are recommended to look at the vignettes and documentation of the packages. 
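As a concrete illustration of this declarative, pipeable style (a toy example of our own, not taken from the packages' documentation), a dplyr query reads much like its SQL counterpart:

library(dplyr)

# Each verb performs a single task and pipes a data.frame to the next verb,
# mirroring a nested SQL query (SELECT ... WHERE ... GROUP BY ...).
mtcars %>%
  filter(cyl != 6) %>%               # SQL: WHERE
  group_by(gear) %>%                 # SQL: GROUP BY
  summarise(mean_mpg = mean(mpg),    # SQL: aggregate in SELECT
            n = n())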
sec-comm §.§ Communication Strategies An experiment is a human endeavour that generally involves more than one person. Successfully completing an experiment typically hinges on the communication between multiple people with their own expertise. Let us consider a scenario where four actors are involved in an experiment: * the domain expert who drives the experimental objective and has the intricate knowledge of the subject area, * the statistician who creates the experimental design layout after taking into account statistical and practical constraints, * the technician who carries out the experiment and collects the data, and * the analyst who analyses the experimental data and help interpret it. The actors are purely illustrative and in practice, multiple people can take on each role, one person can take on multiple roles, and a person is not necessarily a specialist in the role assigned (e.g. the role of the statistician can be carried out by a person whose primarily training is not in statistics). The statistician and analyst may be the same individual but the roles are explicitly differentiated to signal that this is not always the case. All roles can be performed by a single individual. The scenario can begin with the domain expert coming up with a hypothesis or question and recruiting a statistician to help design the experiment. Before a statistician can produce the design layout, they must converse with the domain expert to understand the experimental objective, resources, practical constraints and other possible nuances that might influence the outcome of the experiment. There may be several communications before reaching a shared understanding. The statistician produces the final experimental design along with an analysis plan. Once the design layout is produced, these may be passed to a technician to carry out the experiment as per intended and collect the data. The analyst then extracts information, perhaps using the analysis plan by the statistician, from the collected data with the help of the domain expert for the interpretation. Each actor plays a vital role in the experiment; if even one actor fails in their role, then the whole experiment could be in jeopardy, and in the worst case scenario, resources go to complete waste. Even in this simple scenario, we can see that there are many possible interactions between people with every chance of “human error” in the communication. How might we improve this interdisciplinary communication? Bracken and Oughton (2006) highlighted the importance of language in interdisciplinary research and insisted interdisciplinary projects must allocate time to develop shared vocabularies. Winowiecki et al. (2011) employed scenario building techniques as a tool for interdisciplinary communication to promote structured dialogue to brainstorm particular aspects of the problem. Ideally, we would like to employ a systematic approach that abstracts the problem (and the solution) into a shared understanding. Not all experiments involve more than one person. In the special case where only a single individual is involved, intra-personal communication to internalize their experimental understanding must still take place, and externalizing this understanding by transcribing or otherwise is still important for the future self and others that wish to validate the experimental data. 
Indeed, Nickerson (1999) conjectures reflection on one's own knowledge and evaluation or justification of one's views as some possible countermeasures to overimputing one's knowledge to others, thus mitigating misunderstandings. sec-ed §.§ Terminologies in Experimental Design The field of experimental design is large, and its domain application (e.g., biology, psychology, marketing, and finance) also large. Numerous terminologies are used to describe various aspects or components of the experiment. Some terms apply only to particular domains; therefore, their meaning is not commonly understood across domains; e.g., stimuli are often treatments in behavioural science; cluster and block can be used interchangeably – the former term is more likely used in clinical trials. Terms like experimental unit (smallest unit that independently receives the treatment), observational unit (smallest unit in which the measurement is recorded on) and treatments (a set of conditions allocated to experimental units) are perhaps more universally understood. In a comparative experiment, a control usually refers to the treatment level that may be the baseline for comparison with other treatment levels (a placebo is a common control in pharmaceutical experiments). A replication (of a treatment level) typically refers to the number of times the treatment level is tested. For an overview, see Bailey (2008), Lawson (2015), Montgomery (2020), or other books on experimental design. Some terms are used to describe a complete experimental design (e.g., randomised complete block design, balanced incomplete block design, and split-plot design) with limited parameters, such as the number of treatments and replications. These “named” designs are handy to succinctly describe the experimental structure, but it can create a barrier to understanding the experimental structure if you are unfamiliar with it (e.g. do you know what a beehive design is? For those curious, see F. B. Martin 1973). The experimental structure can be divided into two main substructures: the unit structure and the treatment structure. The unit structure for a completely randomized design is unstructured. A randomized complete block design has a unit structure in which experimental units are nested within blocks. A factorial design is a design in which there is more than one set of treatment factors, where the combination of the treatment levels across those factors compose the whole set of treatments; in such a case, we say that the treatment has a factorial structure. A fractional factorial experiment is an experiment in which only a subset of treatment factor combinations is observed. In industrial experiments, experimental factors are largely classified into control (or primary) factor, constant factor, and nuisance factor (Coleman and Montgomery 1993; Viles et al. 2008). The control factors here are equivalent to the treatment factors. The constant factors are those that are maintained at the same level throughout the experiment, and nuisance factors are those that cannot be controlled. A run typically refers to a complete replicate of an experiment. The terminology in experimental design is certainly diverse. The presented terms thus far represent only a fraction of terms used. This complicates any notion of building a “unified language” to form a common understanding. 
§ THE GRAMMAR OF EXPERIMENTAL DESIGNS In an object-oriented programming (OOP) system, the objects are basic (and relatively modular) components of the system that contain data and code. The grammar of experimental designs, referred to simply as "the grammar" henceforth, is a computational framework that employs the OOP system and considers an experimental design as a working object that users progressively build by declaratively encapsulating the experimental structure through the definition of basic experimental components. This section describes the external abstraction of the framework and its contrast to other systems. The application of the grammar is shown in Section <ref>. §.§ Components of the Grammar As discussed in Section <ref>, the terminology for experimental design is diverse. In forming the grammar, we must formulate objects and their methods such that they are relatively modular building blocks for the final experimental design (see Section <ref> for other grammarwares). The guiding principles for determining the components of the grammar are that the terms have to be: (i) meaningful to a diverse set of people, (ii) reflective of fundamental actions, thoughts or factors in experiments, and (iii) atomic (i.e., cannot be inferred from the composite of other terms). In the grammar, we describe terms fundamentally by considering every categorised entity (physical or otherwise) that may be (directly or indirectly) linked to the experimental unit to be a factor. Every factor in the system is assigned an explicit role that is stored as a class. The three primary roles of a factor, as defined in Table <ref>, are treatment, unit and record. The treatment and unit are encoded as separate classes as these are always semantically distinguished in a comparative experiment. A nuisance (or uncontrollable) factor or any responses can be encoded as a record class. Under the abstraction in Table <ref>, factors such as blocks, clusters, experimental units, observational units, and experimental runs are all just units. Arguably, the small finite number of classes makes it easier to form a shared understanding and limits the introduction of jargon. The grammar uses the relational links between factors to infer other roles of the factor as described next. Table: Definition of explicit roles in the grammar with some examples. The three roles are to some degree characterised by the level of control by the experimenter. Role/Class — Definition — Examples. treatment: a factor that is of primary interest and under complete control by the experimenter (examples: vaccine in vaccine trials; drug in pharmaceutical experiments; variety in plant improvement programs). unit: any categorised entity (physical or otherwise) that is under some control by the experimenter (examples: patient in clinical trials; block in glasshouse experiments; time in longitudinal experiments; spatial index, e.g. row and column, in crop field trials). record: an observed or uncontrollable factor in the experiment (examples: responses from observational units; traits like sex, gender, height, age, and so on of an individual — note some of these may be used as a blocking factor, and therefore should be units in that instance). The relationship between factors assigns an implicit role; e.g., if a treatment factor is allocated to a plot factor, then the plot is an experimental unit. The implicit roles are summarized in Table <ref>. 
Users are not required to be explicit about the implicit roles; instead, they are required to be explicit about the relationships of factors. Table: Implicit roles based on the relationship between factors. Columns: explicit role of A; explicit role of B; A–B relationship; implicit role for B. Rows: (unit, unit, B is nested in A → nested unit); (treatment, unit, B is applied to A → experimental unit); (record, unit, B is measured on A → observational unit). In the grammar, experimental designs are considered objects with two forms: a graph form or a tabular form. The graph form represents an intermediate construct of an experimental design as a pair of directed acyclic graphs (DAGs) representing the high-level and the low-level relationships (referred to as a factor graph and a level graph, respectively). More specifically, in the factor graph, the nodes are factors and the edges are high-level relationships, while in the level graph, the nodes are levels and the edges are the low-level relationships. The direction of the edges specifies the hierarchy between the nodes. An example of the graph form is shown in Figure <ref>. The tabular form represents the final version of the experimental design in a rectangular array where rows are the smallest observational units and the columns are the variables or factors. This tabular form, referred to as the design table, is a typical output of experimental design software. The grammar begins with the initialization of the experimental design object with an empty graph form. The user then declaratively manipulates the object based on a small number of functions, as shown in Figure <ref>. The main actions are to either set the scene (factors in the experiment), allot a factor to another factor, or assign the levels to other levels algorithmically. The actions are concurrently specified with the subject (primary roles); therefore, it is immediately clear from the syntax which element of the experimental design object is targeted. The actions, allot and assign, are made distinct as the former is usually made explicit in dialogue and the latter is almost always algorithmically derived. This concrete syntax may be altered based on the domain specific language (as demonstrated later with the R language in Section <ref>). The object builds up information on the experiment as the users specify the factors and their relationships. When a user completes their specification, they can signal the conversion of the graph form to a tabular form. At this stage, if the specification is valid (nodes in the level graph can all be linked to one row), then it will render the design table. It should be noted that not all experiments are comparative, i.e., some experiments can have no treatment factors. The grammar does not require specification of treatment factors, although it requires at a minimum that units be specified. §.§ Differences to Other Systems By treating an experimental design as a mutable object, the grammar allows a bi-directional interaction between the user and the object, allowing users to inspect and progressively build the experimental design. 
This bidirectional interaction is in contrast to many systems that consider only unidirectional interactions, as illustrated in Figure <ref>, where the major action of the user is to specify a complete experimental design with no recourse to think about individual components of the experiment. Another key difference between the grammar and conventional approaches for the computational generation of an experimental design, as illustrated in Figure <ref>, is that the grammar explicitly defines the experimental structure and output. This does not mean that the grammar cannot optimise the algorithmic assignment of the treatment to units; the user can substitute the corresponding step as they see fit. In this sense, the grammar is complementary to many existing experimental design algorithms. Furthermore, the grammar allows for various inputs that are fundamental to experiments in a cognitive manner. In other words, the grammar treats the specification of the experimental design as a structured dialogue. Consider a scenario where a statistician writes in their notes during the meeting with the domain expert where together they decide on the structure of the experiment. Under the conventional approach, when the statistician enters the structure into the computational system, the statistician has to reformulate this, generally void of the context, to fit the system. By contrast, the grammar is a more natural translation for the statistician to map their notes into the computational system. Indeed, the pre-design master guide sheet by Coleman and Montgomery (1993) suggests a number of elements (e.g. response and treatment factors) that should be captured in these notes that can be directly mapped in the grammar. The example in Section <ref> shows the difference in code between the systems to specify the experimental design. While the code is more verbose in the grammar, it should be clearer in communicating the context of the experiment. sec-examples § APPLICATIONS The grammar presented in Section <ref> necessitates some alterations when translated for a particular domain specific language. For brevity, the translation of the grammar to the R-package (Tanaka 2023) in order to fit the particular nuances of the R language and the user community is not described in this paper. This section aims to demonstrate the utility of the grammar. Instructive guide for the usage of the R-package is reserved for other avenues. The supplementary material shows the full design table outputs and further explanations of the code. In the following subsections, three examples of various flavours are shown to illustrate the grammar of experimental designs described in Section <ref>. Section <ref> demonstrates a comparison of different programming approaches to achieve the same end result. Section <ref> deals with a complex nested design showing how this can be specified using the grammar. Finally, Section <ref> shows an example where the system can be modified to deal with unbalanced cases. sec-classic §.§ Classic Split-Plot Design Consider the classical split-plot experiment introduced by Fisher (1950) where a land was divided into 36 patches, on which 12 varieties were grown, and each variety planted in 3 randomly chosen patches. Each patch was divided into three plots, with the plots randomly receiving either the basal dressing only, sulphate or chloride of potash. 
In constructing this experiment, the statistician may have first randomized the allocation of varieties to the patches with 3 replicates each and then permuted the 3 fertilizer levels to the plots within each patch. A random instance of this design is shown in Figure <ref>. The original experiment measured the yield of each plot. Hypothetically, the technician may also record the biomass for each patch. The construction of this design can proceed in a procedural programming manner where the 12 varieties with 3 replicates are permuted, followed by replicating 36 times the permutation of the 3 fertilizer levels. In the R language, this may be coded as below. There may be further wrangling to produce a design table.

variety <- c("V1", "V2", "V3", "V4", "V5", "V6",
             "V7", "V8", "V9", "V10", "V11", "V12")
fertilizer <- c("basal", "sulphate", "chloride")
set.seed(1) # for reproducibility
sample(rep(variety, each = 3))      # variety allocation
replicate(36, sample(fertilizer))   # fertilizer allocation

Alternatively, the structure of this design is well known as the "split-plot design". The statistician may recognize this structure as a "named" design and generate the design via a functional programming approach where the function name relates to the name of the design. Below, we used the design.split() function from the agricolae R-package (de Mendiburu 2021). Only two sets of treatment factors are expected in a split-plot design, which is reflected in the input parameter names trt1 and trt2. Notice that it is not immediately clear without further interrogation which treatment factor is applied to the patches or the plots; in fact, the units need not be defined.

agricolae::design.split(trt1 = variety, trt2 = fertilizer, r = 3, seed = 1)

In the grammar, the design is progressively defined using a series of composable operations as shown below.

library(edibble)
des1 <- design("Fisher's split-plot design") %>%                   # <1>
  set_units(patch = 36,                                            # <2>
            plot = nested_in(patch, 3)) %>%
  set_trts(variety = 12,                                           # <3>
           fertilizer = c("basal", "sulphate", "chloride")) %>%
  set_rcrds(yield = plot,                                          # <4>
            biomass = patch) %>%
  allot_trts(variety ~ patch,                                      # <5>
             fertilizer ~ plot) %>%
  assign_trts(seed = 1,                                            # <6>
              order = c("random", "random")) %>%
  serve_table()                                                    # <7>

1 The design object is initialised with an optional title of the experiment. 2 The units patch and plot are defined. The patch has 36 levels while plot has 3 levels for each patch. 3 The treatments are variety with 12 levels and fertilizer with levels named "basal", "sulphate" and "chloride". 4 The records in the data collection will be the yield for each plot and the biomass for each patch. 5 The treatments are allotted to units. Specifically, variety to patch and fertilizer to plot. 6 The treatments are then randomly assigned to the corresponding units specified in the allotment. The seed is specified here so we can replicate the results. The system recognises that the plot is nested in the patch and therefore uses this by default to constrain the order in which the treatments are allocated. Specifically, the treatment order for both allotments is random. 7 In the last step, we convert the intermediate design object into the final experimental design table. See Table 1 of the Supplementary Material for the full design table. The Supplementary Material also shows the intermediate outputs and explanation of other functions not shown here. §.§ Complex Nested Design Consider next the experiment in P. A. 
Martin, Johnson, and Forsyth (1996), which aimed to investigate if insecticides used to control grasshoppers affected the weight of young chicks of ring-necked pheasants, either by affecting the grass around the chicks or by affecting the grasshoppers eaten by the chicks. A description and illustration of the experiment are given in Figure <ref>. Another random instance of the design in Figure <ref> is specified in the grammar as follows.

des2 <- design("Complex nested factorial design") %>%
  set_trts(insecticide = 3,                                        # <1>
           dose_level = c("low", "high"),
           food_type = c("sprayed", "unsprayed")) %>%
  set_units(week = 3,                                              # <2>
            strip = nested_in(week, 3),
            swath = nested_in(strip, 2),
            pen = nested_in(swath, 2),
            chick = nested_in(pen, 6)) %>%
  allot_trts(insecticide ~ strip,                                  # <3>
             dose_level ~ swath,
             food_type ~ pen) %>%
  assign_trts(seed = 1) %>%
  serve_table()

1 Here the treatment is defined first with 3 levels of insecticide, two dose levels (low and high) and two food types (sprayed or unsprayed). 2 The units are defined next. The experiment is run over 3 weeks. For each week, there are 3 strips used. Each strip is split into two swathes. Each swath has two pens. Each pen contains 6 chicks. 3 Next we define the allotment of treatments to units. The insecticide is allotted to strip, the dose level to swath and the food type to pen. See Table 2 of the Supplementary Material for the full design table. §.§ Unbalanced Factorial Design Previous examples have conveniently used equal numbers of replicates for each treatment; however, this is often not the case in practice. The proposed system can cater for experiments with an unbalanced number of treatments. Suppose we consider the first four motion sickness experiments reported by Burns (1984). The study, as shown in Figure <ref>, was a collection of separate experiments. In this sense, the treatment (acceleration and frequency) was pre-assigned and completely confounded with the experiment. This unbalanced design in Figure <ref> is specified in the grammar as:

des3 <- design("Motion sickness incidence") %>%
  set_units(experiment = 4,                                        # <1>
            subject = nested_in(experiment,
                                1 ~ 21,
                                2 ~ 20,
                                3 ~ 29,
                                4 ~ 59)) %>%
  set_trts(frequency = c(0.167, 0.250),                            # <2>
           acceleration = c(0.111, 0.222)) %>%
  allot_trts(frequency:acceleration ~ experiment) %>%              # <3>
  assign_trts(order = "systematic") %>%                            # <4>
  serve_table()

1 We specify that there are 4 experiments. Experiments 1, 2, 3 and 4 had 21, 20, 29 and 59 subjects, respectively. 2 There were two treatment factors: frequency with two levels (0.167 and 0.250) and acceleration with two levels (0.111 and 0.222). 3 The combination of the treatment factors is assigned to each experiment. 4 The allocation of the treatment is systematic. See Table 3 of the Supplementary Material for the full design table. § DISCUSSION Multiple people with different expertise are typically involved in planning and executing an experiment, but communication is rarely easy or seamless, especially across people from different domains. In designing experiments, we ought to consider the time (Bracken and Oughton 2006) and methods, such as structured dialogues (Winowiecki et al. 2011), to form a shared understanding. A unified language in experimental designs will aid in rapidly fostering mutual understanding among involved parties. 
In this paper, I propose to leverage the design of the software interface to promote a standardized grammar that govern the expression of experimental designs in a structured approach. A new framework, called “the grammar of experimental designs”, was presented as a process-based tool. The primary novel aspect of this framework is that an experimental design is treated as a mutable object that is progressively altered based on the explicit specifications of fundamental experimental components. This approach exposes the intermediate process to constructing the final experimental design, thus providing a greater opportunity to notice any broader concerns in the experimental designs. This in turn can encourage the investigation or remedy of the experimental plan before its execution. A number of functionalities are not discussed or demonstrated in this paper in order to focus on the general framework rather than on specific features. These functionalities include the specification of intended observational records (including responses) of units; embedded data validation for data entry; simulation of observational records; diagnostics and visualization of designs. Abstract syntax and internal object representation are also only briefly discussed. These functionalities and internals warrant full discussion in separate papers. Furthermore, an extended explanation of the package will be presented in other avenues. The framework does not address all possible experimental structures but extensions of the framework, such as situations with an undetermined number of levels or complex conditional structures, can be envisioned as future research directions. This framework may be compelling for several reasons, some of which have been outlined previously. First, explicit specification raises the cognitive awareness of the experimental context and intention for both the user and the reader. Second, it encourages encoding of information as a machine-readable data, thereby allowing for further interrogation, manipulation or even exportation to multiple formats. Third, it allows for the partial specification of the experimental structure and permits the reuse of the structure. A recipe approach is often used for existing software to generate randomized designs. A recipe or a named design describes an end product and does not permit different processes to reach to a similar end product. The grammar requires users to describe a particular course of actions, thereby intentionally directing users to be explicit. This way the software does not hinder the ability for users to encode more information. The proposed framework is purposefully designed such that it can be extended and improved by other developers. For example, the assignment of treatments (to units) can be substituted with alternative methods. Arguably this step is the most algorithmically difficult part of the process, and is the subject of many experimental design research. The default assignment is currently simplistic. There will be many cases in which the default system will not be suitable or is highly inefficient. The goal of the grammar, however, is not to generate the most efficient or optimal design for every experimental structure, which is an impossible feat without user guidance. The goal of the grammar is to standardize the specifications of the experimental structure so that we can more easily form a shared understanding. As any other language, the grammar of experimental designs has the potential to evolve. 
In principle, the framework promotes good practice by requiring an explicit specification of the elements of the experimental design. However, principle alone is not sufficient to encourage mass adoption. There are several possible extensions that make the framework attractive despite its verbose specifications. These include immediate benefits such as ease of adding data validation and automated visualization – both of which are the subject of future papers. Fishbach and Woolley (2022) suggested that immediate benefits can increase intrinsic motivation. My hope is that these downstream features will eventuate in the mass adoption of the framework, or even a similar framework, which aids in the transparency of the experimental design process. We all gain from better experimental practices. It is in this mass adoption, where we come to share a unified language in experimental designs, that I believe will aid in communication and result in the collective adoption of better experimental designs. The practice of experimental design requires holistic consideration of the total experimental process, including that of psychological processes that translate to practice. supplementary-material § SUPPLEMENTARY MATERIAL The supplementary material contains the full design table outputs from the examples in Section <ref> along with further explanations of the code. acknowledgement § ACKNOWLEDGEMENT tocsectionAcknowledgement This paper uses (Xie 2015), (Xie, Allaire, and Grolemund 2018) and Quarto (Posit 2023) for creating reproducible documents. The code presented uses version 0.1.3 of the package available on CRAN. The latest development of can be found at <https://github.com/emitanaka/edibble>. references § REFERENCES tocsectionReferences refs 10 preref-Bailey2008-gw Bailey, Rosemary A. 2008. Design of Comparative Experiments. Cambridge University Press. preref-bezanson2017julia Bezanson, Jeff, Alan Edelman, Stefan Karpinski, and Viral B Shah. 2017. “Julia: A Fresh Approach to Numerical Computing.” SIAM Review 59 (1): 65–98. <https://doi.org/10.1137/141000671>. preref-bishopAnotherLookStatistician1982 Bishop, Thomas, Bruce Petersen, and David Trayser. 1982. “Another Look at the Statistician's Role in Experimental Planning and Design.” The American Statistician 36 (4): 387–89. preref-Bracken2006-rk Bracken, L J, and E A Oughton. 2006. “'What Do You Mean?' the Importance of Language in Developing Interdisciplinary Research.” Transactions 31 (3): 371–82. <https://doi.org/10.1111/j.1475-5661.2006.00218.x>. preref-burnsMotionSicknessIncidence1984 Burns, K. C. 1984. “https://www.ncbi.nlm.nih.gov/pubmed/6466248Motion Sickness Incidence: Distribution of Time to First Emesis and Comparison of Some Complex Motion Conditions.” Aviation, Space, and Environmental Medicine 55 (6): 521–27. preref-colemanSystematicApproachPlanning1993a Coleman, David E, and Douglas C Montgomery. 1993. “A Systematic Approach to Planning for a Designed Industrial Experiment.” Technometrics 35 (1): 1–12. preref-agricolae de Mendiburu, Felipe. 2021. Agricolae: Statistical Procedures for Agricultural Research. <https://CRAN.R-project.org/package=agricolae>. preref-fishbachStructureIntrinsicMotivation2022 Fishbach, Ayelet, and Kaitlin Woolley. 2022. “The Structure of Intrinsic Motivation.” Annual Review of Organizational Psychology and Organizational Behavior 9 (1): 339–63. <https://doi.org/10.1146/annurev-orgpsych-012420-091122>. preref-Fisher1950-hd Fisher, Ronald A. 1950. Statistical Methods for Research Workers. 11th ed. 
Hahn, Gerald J. 1984. “Experimental Design in the Complex World.” Technometrics 26 (1): 19–31.
Jones, Daniel C., Ben Arthur, Tamas Nagy, Shashi Gowda, Godisemo, Tim Holy, Andreas Noack, et al. 2018. “GiovineItalia/Gadfly.jl: V0.7.0.” Zenodo. <https://doi.org/10.5281/zenodo.1284282>.
Kibirige, Hassan, Greg Lamp, Jan Katins, gdowding, austin, Florian Finkernagel, matthias-k, et al. 2022. “has2k1/plotnine: See the changelog (<https://plotnine.readthedocs.io/en/stable/changelog.html#v0-9-0>).” Zenodo. <https://doi.org/10.5281/zenodo.7124918>.
Klint, Paul, Ralf Lämmel, and Chris Verhoef. 2005. “Toward an Engineering Discipline for Grammarware.” ACM Transactions on Software Engineering and Methodology 14 (3): 331–80. <https://doi.org/10.1145/1072997.1073000>.
Lawson, John. 2015. Design and Analysis of Experiments with R. CRC Press.
Martin, Frank B. 1973. “Beehive Designs for Observing Variety Competition.” Biometrics 29 (2): 397–402. <https://doi.org/10.2307/2529404>.
Martin, Pamela A., Daniel L. Johnson, and Douglas J. Forsyth. 1996. “Effects of Grasshopper-Control Insecticides on Survival and Brain Acetylcholinesterase of Pheasant (Phasianus colchicus) Chicks.” Environmental Toxicology and Chemistry / SETAC 15 (4): 518–24. <https://doi.org/10.1897/1551-5028(1996)015%3C0518:EOGCIO%3E2.3.CO;2>.
Montgomery, D. 2020. Design and Analysis of Experiments. 10th ed. Wiley.
Nickerson, Raymond S. 1999. “How We Know—and Sometimes Misjudge—What Others Know: Imputing One's Own Knowledge to Others.” Psychological Bulletin 125 (6): 737–59.
Posit. 2023. Quarto: An Open-Source Scientific and Technical Publishing System. <https://quarto.org/>.
R Core Team. 2020. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. <https://www.R-project.org/>.
Steinberg, David M., and William G. Hunter. 1984a. “[Experimental Design: Review and Comment]: Response.” Technometrics 26 (2): 128. <https://doi.org/10.2307/1268106>.
Steinberg, David M., and William G. Hunter. 1984b. “Experimental Design: Review and Comment.” Technometrics 26 (2): 71–97.
Tanaka, Emi. 2023. edibble: Designing Comparative Experiments. <https://CRAN.R-project.org/package=edibble>.
Tanaka, Emi, and Dewi Amaliah. 2022. “Current State and Prospects of R-Packages for the Design of Experiments.” <https://doi.org/10.18637/jss.v096.i01>.
Van Rossum, Guido, and Fred L. Drake. 2009. Python 3 Reference Manual. Scotts Valley, CA: CreateSpace.
Viles, E., M. Tanco, L. Ilzarbe, and M. J. Alvarez. 2008. “Planning Experiments, the First Real Task in Reaching a Goal.” Quality Engineering 21 (1): 44–51. <https://doi.org/10.1080/08982110802425183>.
Wickham, Hadley. 2016. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. <https://ggplot2.tidyverse.org>.
Wickham, Hadley, Romain François, Lionel Henry, and Kirill Müller. 2022. dplyr: A Grammar of Data Manipulation. <https://CRAN.R-project.org/package=dplyr>.
Wilkinson, Leland. 2005. The Grammar of Graphics. 2nd ed. Springer.
Winowiecki, Leigh, Sean Smukler, Kenneth Shirley, Roseline Remans, Gretchen Peltier, Erin Lothes, Elisabeth King, Liza Comita, Sandra Baptista, and Leontine Alkema. 2011. “Tools for Enhancing Interdisciplinary Communication.” Sustainability: Science Practice and Policy 7 (1): 74–80. <https://doi.org/10.1080/15487733.2011.11908067>.
Xie, Yihui. 2015. Dynamic Documents with R and Knitr. 2nd ed. Boca Raton, Florida: Chapman & Hall/CRC. <https://yihui.org/knitr/>.
Xie, Yihui, J. J. Allaire, and Garrett Grolemund. 2018. R Markdown: The Definitive Guide. Boca Raton, Florida: Chapman & Hall/CRC. <https://bookdown.org/yihui/rmarkdown>.
http://arxiv.org/abs/2307.04615v1
20230710145809
Numerical quantification of the wind properties of cool main sequence stars
[ "Judy Chebly", "Julián D. Alvarado-Gómez", "Katja Poppenhäger", "Cecilia Garraffo" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.EP" ]
As a cool star evolves, it loses mass and angular momentum due to magnetized stellar winds, which affect its rotational evolution. This change has consequences that range from the alteration of its activity to influences over the atmosphere of any orbiting planet. Despite their importance, observations constraining the properties of stellar winds in cool stars are extremely limited. Therefore, numerical simulations provide a valuable way to understand the structure and properties of these winds. In this work, we simulate the magnetized winds of 21 cool main-sequence stars (F-type to M-dwarfs), using a state-of-the-art 3D MHD code driven by observed large-scale magnetic field distributions. We perform a qualitative and quantitative characterization of our solutions, analyzing the dependencies between the driving conditions (e.g., spectral type, rotation, magnetic field strength) and the resulting stellar wind parameters (e.g., Alfvén surface size, mass loss rate, angular momentum loss rate, stellar wind speeds). We compare our models with the current observational knowledge on stellar winds in cool stars and explore the behaviour of the mass loss rate as a function of the Rossby number. Furthermore, our 3D models encompass the entire classical Habitable Zones (HZ) of all the stars in our sample. This allows us to provide the stellar wind dynamic pressure at both edges of the HZ and analyze the variations of this parameter across spectral type and orbital inclination. The results presented here could serve to inform future studies of stellar wind-magnetosphere interactions and stellar wind erosion of planetary atmospheres via ion escape processes.

exoplanets – stars: atmospheres – stars: magnetic fields – stars: mass-loss – stars: winds, outflows

§ INTRODUCTION

For many decades, scientists have known that the Sun has a mass outflow, which is most visible in the behavior of comet tails (e.g., ). It has also been established that the solar wind is a natural byproduct of the heating processes that produce the hot solar corona (T ∼ 10^6 K). As a result, all cool main-sequence stars (M_ ⩽ 1.3 M_⊙) with analogous hot coronae, evidenced from their measured X-ray properties (), should have similar winds (). Magnetic fields are thought to play a key role as an energy source for the corona and the expanding solar atmosphere (e.g., ). Recent theories have shown that in addition to magnetic fields, wave dissipation (via turbulence) and magnetic reconnection could also play a role in energizing and shaping the spatial properties of the solar wind (see, ). Winds, even if relatively weak, play an important role in stellar evolution for stars of different spectral types, causing the star to lose angular momentum and slow its rotation over time (). As a result, the magnetic activity phenomena that constitute space weather (i.e., stellar winds, flares, coronal mass ejections) will decrease with age in low-mass stars (). These changes in the host star will also affect the evolution of planetary atmospheres and habitability ().
Direct measurements of the solar wind by spacecraft such as the Advanced Composition Explorer (ACE, ), Ulysses <cit.>, and the Parker Solar Probe <cit.> have improved our knowledge and understanding of its properties. On the other hand, detecting a solar-like wind emitted by another star has proven extremely challenging. This is not surprising, given how difficult it is to observe the solar wind remotely. The latter carries a very low mass loss rate (Ṁ_⊙ = 2 × 10^-14 M_⊙ yr^-1, see ), which implies relatively low densities (near the heliopause: ∼ 0.002 cm^-3, ). Similarly, its high temperature and elevated ionization state make it difficult to detect with simple imaging or spectroscopic techniques. As a result, properties such as the associated mass loss rates, angular momentum loss rates, and terminal velocities, crucial to understanding stellar winds in low-mass stars, remain poorly constrained. Attempts to directly detect thermal radio emission from the plasma stream in cool stars have not yet led to any discovery (). Current radio telescopes are not optimized for this method; they can only detect winds much stronger than those from the Sun. Moreover, the coronae of these active stars are also radio sources, making it difficult to determine the exact source of the emission. Nevertheless, this method has been able to establish upper limits for solar analogs of 1.3 × 10^-10 M_⊙ yr^-1 (). Another proposed method for direct detection is to look for X-ray emission from nearby stars. As the star's winds propagate, they collide with the Local Interstellar Medium (ISM), forming "astrospheres" similar to the Sun's heliosphere <cit.>. The charge exchange between the highly ionized stellar wind and the ISM produces X-ray photons with energies ranging from 453 to 701 eV. However, this method was unable to detect circumstellar charge exchange X-ray emission even from the nearest star, Proxima Centauri <cit.>. Similar to the charge exchange X-ray emission method, the Ly-α absorption technique assumes the presence of the charge exchange phenomenon. In this case, however, we are interested in the neutral hydrogen wall formed at the astrospheric outer boundary by the interaction between the stellar wind and the ISM. This exchange has been detected as excess HI Ly-α absorption in Hubble Space Telescope UV stellar spectra <cit.>. With nearly 30 measurements to date, spectroscopic analyses of the stellar HI lines have proven to be the best method to unambiguously detect and measure weak solar-like winds, as well as the winds of some evolved cool stars <cit.>. Using this method, <cit.> found evidence for some increase in Ṁ with magnetic activity, corresponding to a power-law relation in the form Ṁ∝ F_ X^ 1.34 ± 0.18 with F_ X < 10^ 6 erg cm^ -2 s^ -1. However, this relation does not seem to hold anymore for more active stars (F_ X > 10^ 6 erg cm^-2 s^-1), mainly M-dwarfs <cit.>. Recently, <cit.> established a power law (Ṁ∝ F_ X^ 0.77 ± 0.04) between the Ṁ per unit surface area and the X-ray surface flux for coronal winds for a broader selection of stars, including G, K, and new Ṁ estimates for M-dwarfs. They found that the relation breaks even for stars with F_ X < 10^ 6 erg cm^-2 s^-1 (e.g., GJ 436, which has F_ X = 4.9 × 10^ 4 erg cm^-2 s^ -1, where the Ṁ was estimated by using the planet as a probe for the stellar wind ), with the magnetic topology being a possible factor for the scatter. While extremely useful, the search for astrospheric absorption is influenced by a number of critical factors.
For instance, this method is strongly dependent on the relative velocity of the stellar rest frame and the ISM flow velocity (V_ ISM). As well as on the angle, θ, between the upwind direction of the ISM flow and the line-of-sight to the star <cit.>. It also requires prior knowledge of the properties of the ISM such as the density and its ionization state (; ). Finally, its applicability is limited to relatively nearby stars (≲ 15 pc) due to the absorption of the ISM. Due to the scarcity of observational data and associated limitations, numerical simulations can be used to improve our understanding of stellar winds. Models based on Alfvén waves are more commonly used to simulate the stellar wind from stars other than the Sun <cit.>. This is because these waves are considered to be key mechanism for heating and accelerating the solar wind (; ). In this study, we present a detailed numerical characterization of the stellar wind properties of cool main-sequence stars (early F to M-dwarfs) covering a range of rotation rates and magnetic field strengths. We compute steady-state stellar wind solutions using a state-of-the-art 3D MHD model and provide consistent qualitative and quantitative comparisons. Our goal is to better understand the different stellar wind properties as a function of the driving parameters, allowing us to explore the expected stellar wind conditions in the circumstellar region around planet-hosting stars. This paper is organized as follows: Section <ref> describes the numerical model and properties of the selected stellar sample. In Sect. <ref>, we present our numerical results, discuss the derived trends in the stellar wind properties, and compare our results with observations. This information is then used to quantify the stellar wind conditions and explore their implications in the context of the classical habitable zone (HZ) around cool main-sequence stars. Conclusions and summary are provided in Sect. <ref>. § MODEL DESCRIPTION We simulate stellar winds in cool main-sequence stars using the state-of-the-art Space Weather Modeling Framework (SWMF; ). The SWMF is a set of physics-based models (from the solar corona to the outer edge of the heliosphere) that can be run independently or in conjunction with each other <cit.>. This model uses the numerical schemes of the Block Adaptive Tree Solar Roe-Type Upwind Scheme (BATS-R-US; ) MHD solver. For a detailed description of the model, see <cit.>. The multi-domain solution starts with a calculation using the Solar/Stellar Corona (SC) module which incorporates the Alfvén Wave Solar Model (AWSoM; ). This module provides a description of the coronal structure and the stellar wind acceleration region. The simulation is then coupled to a second module known as the Inner Heliosphere/Astrosphere[This module is formally labeled IH within the SWMF, but since we are working with low-mass main sequence stars, we will refer to it as the Inner Astrosphere (IA) domain.] (IA). In this way, it is possible to propagate the stellar wind solution up to Earth's orbit and beyond. The model has been extensively validated and updated employing remote sensing as well as in-situ solar data (e.g., ). AWSoM is driven by photospheric magnetic field data, which is normally available for the Sun in the form of synoptic magnetograms <cit.>. A potential field source surface method is used to calculate the initial magnetic field (more details in the following section). 
This information is used by AWSoM to account for heating and radiative cooling effects, as well as the Poynting flux entering the corona, and empirical turbulent dissipation length scales. With the interplay between the magnetic field distribution, the extrapolation of the potential field, and the thermodynamic properties, the model solves the non-ideal magnetohydrodynamic (MHD) equations for the mass conservation, magnetic field induction, energy (coronal heating), and momentum (acceleration of the stellar wind). These last two aspects are controlled by Alfvén waves propagating along and against the magnetic field lines (depending on the polarity of the field). In the momentum equation, the heat and acceleration contributions are coupled by an additional term for the total pressure and a source term in the energy equation. The numerical implementation is described in detail in <cit.>. Once these conditions are provided, the simulation evolves all equations locally until a global steady-state solution is reached. §.§ Simulation parameters and setup In our work, we apply the SWMF/AWSoM model to main-sequence F, G, K, and M-type stars by assuming that their stellar winds are driven by the same process as the solar wind. We analyze the properties of the stellar wind by a coupled simulation covering the region of the stellar corona (SC, spherical) and the resulting structure within the inner astrosphere (IA, cartesian). Figure <ref> illustrates the coupling procedure in one of our models. This coupling was necessary only in the case of F, G, and K stars, in order to completely cover the habitable zones (HZ)[The range of orbits around a star in which an Earth-like planet can sustain liquid water on its surface.], which are larger and farther away from the star. Parameters such as stellar radius (R_), mass (M_), and rotation period (P_ rot), are also taken into account in the simulations. We followed the approach in <cit.> in order to determine the optimistic HZs boundaries of each star in our sample. §.§.§ Simulation domain The star is positioned in the center of the SC spherical domain. The radial coordinate in SC ranges from 1.05 R_ to 67 R_, except for M-dwarfs, where it extends to 250 R_. The choice of the outer edge value of the SC domain was chosen in a way to obtain both edges of the HZ in one domain. The habitable zones limits were calculated using <cit.> approach and the reported measured L_ and T_ eff for each star in our sample (see Table <ref>). As will be discussed in Sect. <ref>, in the case of M-dwarfs, the extension had to be performed in order to cover the entire Alfvén surface (AS)[This structure sets the boundary between the escaping wind and the magnetically coupled outflows that do not carry angular momentum away from the star.], while keeping the default parameters for AWSoM fixed (see Sect. <ref>). The domain uses a radially stretched grid with the cartesian z-axis aligned with the rotation axis. The cell sizes in the meridional (ϕ) and azimuthal (θ) directions are fixed at ∼ 2.8 ^∘. The total number of cells in the SC domain is ∼ 8 × 10^5. The steady-state solutions obtained within the SC module are then used as inner boundary conditions for the IA component. An overlap of 5 R_ (from 62 R_ to 67 R_) is used in the coupling procedure between the two domains for F, G, and K stars (more details on the necessity of the overlap when coupling between domains can be found in ). The IA is a cube that extends from 62  R_ to 600 R_ in each cartesian component. 
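To make the habitable-zone calculation described above concrete, the following minimal Python sketch evaluates optimistic HZ limits from the stellar luminosity and effective temperature using a Kopparapu-style effective-flux prescription. It is an illustrative sketch only: the default recent-Venus and early-Mars flux values, the zeroed temperature-correction coefficients, and the example inputs are assumptions, and the published coefficient table should be used for any real limits.

import numpy as np

def seff(teff, seff_sun, coeffs=(0.0, 0.0, 0.0, 0.0)):
    # Effective stellar flux at a HZ boundary, in units of the solar constant.
    # Quartic in t = teff - 5780 K (Kopparapu-style functional form); the
    # temperature coefficients default to zero here and should be replaced
    # by the published values in practice.
    a, b, c, d = coeffs
    t = teff - 5780.0
    return seff_sun + a * t + b * t**2 + c * t**3 + d * t**4

def optimistic_hz(l_star, teff, seff_inner=1.776, seff_outer=0.32):
    # Optimistic HZ limits in au (recent-Venus / early-Mars boundaries).
    # l_star: bolometric luminosity in solar units; teff: effective temperature in K.
    # The default flux values are the commonly quoted optimistic limits (indicative only).
    d_in = np.sqrt(l_star / seff(teff, seff_inner))
    d_out = np.sqrt(l_star / seff(teff, seff_outer))
    return d_in, d_out

# Illustrative example: L = 0.34 L_sun, Teff = 5100 K (not a star from the sample)
print(optimistic_hz(0.34, 5100.0))   # -> roughly (0.44 au, 1.03 au)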
Adaptive Mesh refinement (AMR) is performed within IA, with the smallest grid cell size of ∼ 1.17 R_ increasing up to 9.37 R_ with a total of 3.9 million cells. As the simulation evolves, the stellar wind solution is advected from SC into the larger IA domain where the local conditions are calculated in the ideal MHD regime. §.§.§ Magnetic boundary conditions In the initial condition of the simulation, observations are used to set the radial component of the magnetic field B_ r [G] anchored at the base of the wind (at the inner boundary). As mentioned earlier, a finite potential field extrapolation procedure is carried out to obtain the initial configuration of the magnetic field throughout SC <cit.>. This procedure requires setting an outer boundary (source surface, r_ s), beyond which the magnetic field can be considered to be purely radial and force-free. The magnetic field can therefore be described as a gradient of a scalar potential and determined by solving Laplace's equation in the domain. For the simulations discussed here, we set r_ s at 45% of the SC domain size for F, G, and K stars, and 70 % for M-dwarfs. While the choice of this parameter does not alter significantly the converged solutions, it can modify the required run time of each model to achieve convergence. Therefore, our selection was done to guarantee convergence to the steady-state in a comparable number of iterations between all spectral types. The stellar magnetic field as reconstructed from Zeeman Doppler Imaging (ZDI)[A tomographic imaging technique that allows the reconstruction of the large-scale magnetic field (strength and polarity at the star’s surface from a series of polarized spectra (see e.g., ).], is used as the inner boundary condition of SC (Fig. <ref>). Therefore, the resulting wind solutions are more realistic than models based on simplified/idealized field geometries <cit.>. Although the reconstructed maps provide the distribution of vector magnetic fields, we use only the radial component of the observed surface field. The magnetogram is then converted into a series of spherical harmonic coefficients with a resolution similar to that of the original map. The order of the spherical harmonics should be chosen so that artifacts such as the "ringing" effect do not appear in the solution <cit.>. In our models, we performed the spherical harmonics expansion up to l_ max = 5. §.§.§ Input parameters After we set the initial conditions, we define several parameters for the inner boundary. In order to reduce the degree of freedom of the parameter set, we only modify the parameters related to the properties of the stars, such as mass, rotation period, and radius. As for the other parameters, we implement the same values that are commonly used in the solar case (). The Poynting flux (S/B_ = 1.1× 10^6 J m^-2 s^-1 T) is a parameter that determines the amount of wave energy provided at the base of a coronal magnetic field line. The other parameter is the proportionality constant that controls the dissipation of Alfvén wave energy into the coronal plasma and is also known as the correlation length of Alfvén waves (L_⊥ = 1.5× 10^5 m √(T)). We use the values given in <cit.> to define the base temperature (T_ o = 2 × 10^6 K) and the base density (n_ o = 2× 10^11 cm^-3). We note that the choice of these parameters will affect the simulation results, as reported in several studies that followed different approaches (e.g., ). 
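As a purely illustrative aside on the spherical-harmonic expansion of the ZDI maps described above (and not a description of AWSoM's internal implementation), the Python sketch below projects a radial surface field map onto spherical harmonics up to l_max = 5 and rebuilds the truncated map. The synthetic test field, the uniform grid, and the simple rectangle-rule quadrature are assumptions made for demonstration.

import numpy as np
from scipy.special import sph_harm

def br_to_sph_coeffs(br, theta, phi, l_max=5):
    # br:    2D array (n_theta, n_phi), radial field in gauss
    # theta: 1D colatitudes in radians (0..pi); phi: 1D longitudes (0..2*pi)
    tt, pp = np.meshgrid(theta, phi, indexing="ij")
    darea = np.sin(tt) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    coeffs = {}
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # scipy's sph_harm signature is (m, l, azimuth, colatitude)
            ylm = sph_harm(m, l, pp, tt)
            coeffs[(l, m)] = np.sum(br * np.conj(ylm) * darea)
    return coeffs

def reconstruct(coeffs, theta, phi):
    tt, pp = np.meshgrid(theta, phi, indexing="ij")
    br = np.zeros_like(tt, dtype=complex)
    for (l, m), c in coeffs.items():
        br += c * sph_harm(m, l, pp, tt)
    return br.real

# Synthetic low-order test map: tilted-dipole-like field in gauss
theta = np.linspace(0.01, np.pi - 0.01, 90)
phi = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
tt, pp = np.meshgrid(theta, phi, indexing="ij")
br_map = 100.0 * np.cos(tt) + 20.0 * np.sin(tt) * np.cos(pp)
c = br_to_sph_coeffs(br_map, theta, phi, l_max=5)
print(np.max(np.abs(reconstruct(c, theta, phi) - br_map)))  # quadrature-limited residual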
Recently, <cit.> performed a global sensitivity analysis to quantify the contributions of model parameter uncertainty to the variance of solar wind speed and density at 1 au. They found that the most important parameters were the phostospheric magnetic field strength, S/B_, and L_⊥. Furthermore, in <cit.>, an increase in the mass loss rate (Ṁ_), and angular momentum loss rate (J̇_) was reported when S/B_ is increased from the solar value to 2.0× 10^6 J m^-2 s^-1 T), which is expected because S/B_ drives the energy of the Alfvén wave, resulting in higher Ṁ_ and J̇_. In this work, however, we are interested in isolating the expected dependencies with the relevant stellar properties (e.g., mass, radius, rotation period, photospheric magnetic field) which can only be analyzed consistently if the AWSoM related parameters are kept fixed between spectral types. Moreover, as will be discussed in detail in Sect. <ref>, the results obtained using the standard AWSoM settings are either consistent with current stellar wind observational constraints for different types of stars or the apparent differences can be understood in terms of other physical factors or assumptions made in the observations. For these reasons, we have chosen not to alter these parameters in this study, which also reduces the degrees of freedom in our models. §.§ The sample of stars Our investigation is focused on main sequence stars, with effective temperatures ranging from 6500 K down to 3030 K, and masses M_ < 1.34 M_⊙ (spectral types F to M). All of these stars are either fully or partially convective. We use a sample of 21 stars whose large-scale photospheric magnetic fields were reconstructed with ZDI ( and references therein). Some of these stars were observed at different epochs. In this case, the ZDI map with the best phase coverage, signal-to-noise ratio, and most spectra used in the reconstruction was chosen. The sample includes radial magnetic field strengths in the ZDI reconstruction between 5 G and 1.5 kG corresponding to HD 130322 (K0V) and EV Lac (M3.5V), respectively. Spectral types range from F7 (τ Boo, M_ = 1.34 M_⊙, R_ = 1.46 R_⊙) to M6 (GJ 1245 B, M_ = 0.12 M_⊙, R_ = 0.14 R_⊙). The rotation periods vary between fractions of a day to tens of days, with GJ 1245 B (M6V) having the shortest rotation period (P_ rot = 0.71 d) and HD 219134 (K3V) the longest one (P_ rot = 42.2 d). Table <ref> contains the complete list of the sample stars and a summary of the stellar properties incorporated in our models. § RESULTS & DISCUSSION §.§ The effect of star properties on the wind structure The Alfvén surface (AS) is defined by the collection of points in the 3D space that fulfils the Alfvén radius criterion[The Alfvén radius (R_ A) is defined as the distance around a star at which the kinetic energy density of the stellar wind equals the energy density of the astrospheric magnetic field.]. Numerically, it is determined by finding the surface for which the wind velocity reaches the local Alfvén velocity, v_A = B/√(4 πρ), where B and ρ are the local magnetic field and plasma density, respectively. The Alfvén surface can be interpreted as the lever arm of the wind torque –the "position" at which the torque acts to change the angular rotation of the star[In other words, the angular momentum per unit mass within the stellar wind can be computed as if there were solid body rotation, at an angular velocity Ω_ = 2π/P_ rot, out as far as the Alfvén surface.]. 
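As a simple numerical illustration of the Alfvén-radius criterion just defined, the sketch below evaluates the local Alfvén speed, v_A = B / sqrt(4 pi rho), along a radial ray and interpolates the first super-Alfvénic crossing. The power-law field and density profiles and the wind acceleration law are idealized assumptions, not extractions from the simulations.

import numpy as np

M_P = 1.6726e-24  # proton mass [g]

def alfven_speed(b_gauss, n_cm3):
    # v_A = B / sqrt(4*pi*rho), returned in km/s (pure hydrogen plasma assumed)
    rho = n_cm3 * M_P
    return b_gauss / np.sqrt(4.0 * np.pi * rho) / 1.0e5

def alfven_radius(r, u_r, b, n):
    # First radius (same units as r) where M_A = u_r / v_A reaches unity;
    # returns None if the wind stays sub-Alfvénic over the profile.
    m_a = u_r / alfven_speed(b, n)
    above = np.where(m_a >= 1.0)[0]
    if len(above) == 0:
        return None
    i = above[0]
    if i == 0:
        return r[0]
    f = (1.0 - m_a[i - 1]) / (m_a[i] - m_a[i - 1])
    return r[i - 1] + f * (r[i] - r[i - 1])

# Idealized radial profiles (stellar radii, km/s, gauss, cm^-3)
r = np.linspace(1.05, 60.0, 500)
b = 10.0 * (r / 1.05) ** -3                    # dipole-like decay
n = 2.0e8 * (r / 1.05) ** -2                   # r^-2 density fall-off
u = 600.0 * (1.0 - np.exp(-(r - 1.0) / 5.0))   # simple wind acceleration law
print(alfven_radius(r, u, b, n))               # crossing at a few stellar radii for these profiles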
The Alfvén Surface is used in numerical models to characterize (Ṁ_) and (J̇_) (e.g., ). We compute J̇_ by performing a scalar flow rate integration over the AS and another one over a closed spherical surface (S) beyond the AS to determine Ṁ_: Ṁ_ = ∫_Sρ (u·dA) J̇_ = ∫_ASΩρ R^2sin^2θ (u·dA) Here J̇_ is the component of the change in angular momentum in the direction of the axis of rotation. The distance to the Alfvén surface is represented by R. The angle between the lever arm and the rotation axis is denoted by θ, which depends on the shape/orientation of the AS with respect to the rotation axis (and accounted for in the surface integral). The stellar angular velocity is represented by Ω = 2 π/ P_ rot. The surface element is denoted by dA. Figure <ref> shows the AS of the stellar wind, with plasma streamers along with the equatorial section flooded with the wind velocity (U_ r) for three K stars in our sample (HIP 12545, HD 6569, 61 Cyg A). If we compare two stars with similar P_ rot but different B_ R^ max, we can clearly see that the size of AS increases with increasing magnetic field strength. This is a direct consequence of the dependence of the Alfvén velocity on these quantities (Eq. <ref>) and the distance from the star at which the Alfvén velocity is exceeded by the wind. For instance, for very active stars with stronger magnetic fields, the expected coronal Alfvén velocity is greater than for less active stars, increasing the radial distance that the wind velocity must travel to reach the Alfvén velocity. The associated Alfvén surface has a characteristic two-lobe configuration (Fig. <ref>, gray translucent area), with average sizes of 27 R_, 18 R_ and 13 R_ for HIP 12545, HD 6569, and 61 Cyg A, respectively (see Table <ref>). When we compare two stars with similar magnetic field strengths but different P_ rot (see Fig. <ref>, panels B and C), the change in AS size is not as dramatic. The rotation period has primarily a geometric effect on the resulting AS. The Alfvén surface assumes a different tilt angle in all three cases. This tilt is mainly connected to the open magnetic field flux distribution on the star's surface <cit.>. We also notice in Fig. <ref> that the stellar wind distribution is mainly bipolar with a relatively fast component reaching up to ∼ 891 km s^−1 for HIP 12545 ∼, 702 km s^−1 for HD 6569, and ∼ 593 km s^−1 for 61 Cyg A. In section <ref> we will discuss further the relation between the wind velocity with regard to P_ rot and B_ R. Figure <ref> shows the Ṁ_, J̇_, AS as estimated by the previously described method, against the sub-spectral type of our star sample (left column) and the average radial magnetic field strength (B_ R^ avg, right column). Similar relations have been obtained for the maximum radial magnetic field strength and are presented in Appendix <ref>. The average Alfvén surface size was calculated by performing a mean integral over the radius at each point of the 3D AS. The extracted quantities are represented by different colors and symbols for each spectral type (F, G, K, and M). As expected, the AS increases as we move toward more magnetically active stars (Fig. <ref>, top-right panel). From our simulations, we were able to establish a relation between AS and B_ R^ avg using the bootstrap technique (1000 realizations) to find the mean of the slope and the intercept along with their uncertainties. We use this approach to determine all relations from our simulations. 
The relation is as follows: logAS_ R = (0.42 ± 0.06) log B_ R^ avg + (0.71 ± 0.07) Our simulated steady-state Ṁ_ show a scatter within the range [0.5 Ṁ_⊙/R_⊙^2, 30 Ṁ_⊙/R_⊙^2], which is comparable to that estimated from the observed Lyα absorption method of G, K, and M-dwarfs in <cit.>. The variations in Ṁ_ are related to differences in the strength and topology of the magnetic field driving the simulations (see ), as well as to the Alfvén wave energy transfer to the corona and wind implemented in the model (). For this reason, we tried to isolate the effects introduced by the star (e.g., M_, R_, P_ rot, magnetic field strength) over the ones from the Alfvén wave heating (i.e., n_ o, T_ o, S/B_, L_⊥). In terms of mass loss rate, stronger winds are expected to be generated by stronger magnetic fields (see Fig. <ref>) implying that the winds are either faster or denser. This interplay determines Ṁ_ (Eq. <ref>), which increases with increasing magnetic field strength regardless of spectral type. We see a common increase for F, G, K, and M-dwarfs (excluding EV Lac) in the saturated and unsaturated regime that can be defined from the simulations as follows: logṀ_ / R^ 2_ = (0.48 ± 0.09) log B_ R^ avg + (0.11 ± 0.10) On the other hand, we observe a slightly different behavior for M-dwarfs, whose Ṁ_ and J̇_ values tend to be lower. As discussed by <cit.>, the magnetic field complexity could also affect Ṁ_ for a given field strength. We consider this possibility in the following section. Note that, as has been shown in previous stellar wind studies of M-dwarfs (e.g., ), modifications to the base AWSoM parameters (either in terms of the Poynting flux or the Alfvén wave correlation length) would lead to strong variations in Ṁ_. This would permit placing the M-dwarfs along the general trend of the other spectral types in particular, the Ṁ_ value obtained for the star with the strongest B_ R in our sample (EV Lac). While these modifications have physical motivations behind them (i.e. increased chromospheric activity, stronger surface magnetic fields), in most regards, they remain unconstrained observationally. Furthermore, the values we obtain in our fiducial AWSoM models are still within the range of observational estimates available for this spectral type (see Sect. <ref>), with the added benefit of minimizing the degrees of freedom and isolating the effects of the stellar parameters on the results. Similarly, we see a large scatter of J̇_ with respect to the spectral type (Fig. <ref>, bottom left column), ranging from 10^26  g cm^2 s^-2 to 10^31  g cm^2 s^-2. This range is within the expected J̇_ values estimated for cool stars with the lowest value corresponding to M-dwarfs ( and references therein). The maximum J̇_ values reached in our simulations are comparable to J̇_⊙ reached at solar minimum and maximum (7× 10^ 30 and 10 × 10^ 30 g.cm^2s^-2, ; ). We note that this is the only parameter for which we have retained units in absolute values (as is commonly done in solar/stellar wind studies; see ; ; ). Using absolute units, we expect a decrease in J̇_ as we move from F to M-dwarfs, since J̇_ is a function of R^2 (Eq. <ref>). The scatter around this trend is dominated by the relatively small Ṁ_ values, the distribution of Ω_ in our sample (variations up to a factor of 5), and the equatorial AS size where the maximum torque is applied (sinθ in Eq. <ref>). We also note that the sample is biased toward weaker magnetic field strengths. 
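The bootstrap procedure used to obtain the fitted relations above can be sketched in a few lines of Python. The pairs-resampling scheme with an ordinary least-squares fit in log-log space is an assumption about the implementation, and the demonstration arrays are placeholders rather than the actual simulation outputs.

import numpy as np

rng = np.random.default_rng(42)

def bootstrap_loglog_fit(x, y, n_boot=1000):
    # Mean and standard deviation of slope/intercept for log10(y) = a*log10(x) + b
    lx, ly = np.log10(x), np.log10(y)
    n = len(lx)
    slopes, intercepts = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample pairs with replacement
        a, b = np.polyfit(lx[idx], ly[idx], 1)
        slopes.append(a)
        intercepts.append(b)
    slopes, intercepts = np.array(slopes), np.array(intercepts)
    return (slopes.mean(), slopes.std()), (intercepts.mean(), intercepts.std())

# Placeholder data: average AS size [R_star] versus mean radial field strength [G]
b_avg = np.array([3.0, 8.0, 15.0, 40.0, 120.0, 450.0])
as_avg = np.array([8.0, 12.0, 16.0, 25.0, 40.0, 75.0])
(slope, d_slope), (inter, d_inter) = bootstrap_loglog_fit(b_avg, as_avg)
print(f"log AS = ({slope:.2f} +/- {d_slope:.2f}) log B + ({inter:.2f} +/- {d_inter:.2f})")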
To better estimate how the magnetic field affects the properties of the stellar winds, we need a larger sample, not only in terms of stellar properties but also with stellar wind constraints such as Ṁ_. The latter is so far the only stellar wind observable parameter for which comparisons can be made. For this reason, we will focus on the behavior of the Ṁ_ as a function of different stellar properties in the following sections of the analysis. §.§ Stellar mass-loss rate and complexity Coronal X-ray luminosity is a good indicator of the level of magnetic activity of a star and the amount of material heated to 10^6 K temperatures. The dependence of magnetic activity on dynamo action (i.e., dynamo number D = R_ o^-2, ) has led a number of authors to use the Rossby number to characterize stellar activity, for a wide range of stellar types <cit.>. The Rossby number is defined as R_ o = P_ rot/τ_ c, where P_ rot is the stellar rotation period and τ_ c is the convective turnover time (). We adopted the approach of <cit.> to calculate τ_ c. In this case, the latter is only a function of the stellar mass (M_): logτ_ c = 2.33 - 1.50 (M_/M_⊙) + 0.31 (M_/M_⊙)^2 As it was mentioned in Sect. <ref>, the study of <cit.> suggests that coronal activity increases with Ṁ_. The overall increase in Ṁ_ with X-ray flux F_ X (Ṁ_∝ F_ X^0.77±0.04), is most likely due to their dependence on magnetic field strength (see Sect. <ref>). However, they report a scatter of about two orders of magnitude of Ṁ_ around the trend line. This suggests that coronal activity and spectral type alone do not determine wind properties. The geometry of the magnetic field may also play a role. The correlation between Ṁ_ and magnetic complexity has already been suggested by <cit.>, which could in principle contribute to the scatter in (, Fig. 10). The large-scale distribution of the magnetic field on the stellar surface is mainly determined by the rotation period and the mass of the star, namely R_ o (). The Rossby number was used to determine the complexity function in <cit.>, which was able to reproduce the bimodal rotational morphology observed in young open clusters (OCs). The complexity function of <cit.> is defined as n = a/R_ o+ 1 + bR_ o The constant 1 reflects a pure dipole. The coefficients a = 0.02 and b = 2 are determined from observations of OCs. The first term is derived from the ZDI map observation of stars with different spectral types and rotation periods. The third term is motivated by Kepler's observations of old stars (). We emphasize that the complexity number (n), estimated from Eq. <ref>, differs from the complexity derived from the ZDI maps themselves (e.g., ). The complexity number from R_ o is expected to be higher. This is due to the fact that many of the small-scale details of the magnetic field are not captured by ZDI. We expect to lose even more information about the complexity of the field given that the ZDI maps are not really available to the community (apart from the published images). Image-to-data transformation techniques (which we applied to extract the relevant magnetic field information from the published maps) can lead to some losses of information, both spatially and in magnetic field resolution. These vary depending on the grid and the projection used to present the ZDI reconstructions (i.e., Mercator, flattened-polar, Mollweide). Using the star's raw ZDI map would prevent these issues and would aid with the reproducibility of the simulation results. 
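Both expressions above translate directly into code. The short Python sketch below evaluates the convective turnover time, the Rossby number, and the expected complexity number n, using only the coefficients quoted in the text; the solar-like example values are illustrative.

import numpy as np

def convective_turnover_time(m_star):
    # log10(tau_c [d]) = 2.33 - 1.50*(M/Msun) + 0.31*(M/Msun)^2
    m = np.asarray(m_star, dtype=float)
    return 10.0 ** (2.33 - 1.50 * m + 0.31 * m**2)

def rossby_number(p_rot_days, m_star):
    # Ro = P_rot / tau_c
    return p_rot_days / convective_turnover_time(m_star)

def complexity_number(ro, a=0.02, b=2.0):
    # n = a/Ro + 1 + b*Ro, where the constant 1 corresponds to a pure dipole
    return a / ro + 1.0 + b * ro

# Example with solar-like values (P_rot ~ 25 d, M = 1 Msun)
ro = rossby_number(25.0, 1.0)
print(ro, complexity_number(ro))   # Ro close to ~1.8 with this calibration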
Finally, note that the expected complexity is also independent of the spherical harmonic expansion order used to parse the ZDI information to the simulations. The obtained R_ o and n values for each star in our sample are listed in Table <ref>. Figure <ref> shows the behaviour of coronal activity and Ṁ_ with respect to the expected magnetic field complexity (n). The coronal activity is denoted by full and empty symbols corresponding to saturated and unsaturated stars, respectively. We consider stars with R_ o≤ 0.1 in the saturated regime and stars with R_ o > 0.1 in the unsaturated regime based on X-ray observations (). The colors correspond to the different spectral types, whereas the numbers indicate the ID of each star in our sample. The symbol size represents the maximum radial magnetic field strength of each star extracted from the ZDI observations. We anticipate seeing a trend in which the Ṁ_ decreases as the magnetic field complexity increases (leading to an increment of closed loops on the stellar corona), for stars in saturated and unsaturated regimes. For instance, ϵ Eri (#10, B_ R^ max= 25 G, n = 2.21724) has an Ṁ_ = 4.53 Ṁ_⊙/R_⊙^ 2 lower than HD 6569 (#9, B_ R^ max=  29 G, n = 1.80346 ) with Ṁ_ = 6.70 Ṁ_⊙/R_⊙^ 2. This is also true for τ Boo and HD 179949 where τ Boo (#1, B_ R^ max= 14 G, n = 1.84728, Ṁ_ = 2.30 Ṁ_⊙/R_⊙^ 2) has a higher Ṁ_ compared to HD 179949 (#2, B_ R^ max= 12 G, n = 2.65746, Ṁ_ = 1.90 Ṁ_⊙/R_⊙^ 2). We also noticed that as we go to more active stars, like in the case of M-dwarfs, the field strength starts to dominate over the complexity in terms of contribution to the Ṁ_. For example, GJ 1245 B (#21, B_ R^ max= 404 G , n = 5.02602 , Ṁ_ = 9.27 Ṁ_⊙/R_⊙^ 2) has an Ṁ_ higher than DT Vir even though the complexity of the former is almost 5 times higher (DT Vir, #17, B_ R^ max= 327 G, n = 1.41024, Ṁ_ = 3.81 Ṁ_⊙/R_⊙^ 2). However, in order to better understand the contribution of the complexity in Ṁ_, we will need to run simulations for a wider range of stars with sufficiently high resolution of the driving magnetic field to capture directly the complexity of the field (and not estimate it from a scaling relation as it was performed here). Moreover, our results show that whenever we have a case in which the star properties (M_, R_, and P_ rot), magnetic field strength and complexity are comparable, we end up with similar Ṁ_. This will be the case of TYC 6878-0195-1 (#13, B_ R^ max= 162 G, n = 1.48069, Ṁ_= 17.42 Ṁ_⊙/R_⊙^ 2) and HIP 12545 (# 15, B_ R^ max= 184 G, n = 1.41505, Ṁ_ = 20.11 Ṁ_⊙/R_⊙^ 2). Furthermore, two stars with similar coronal activity with respect to X-ray flux, i.e., EV Lac and YZ CMi (F_ X≈ 10^7 ergs cm^-2 s^-1), but with slightly different magnetic field complexity, result in different wind properties: respectively Ṁ_ = 0.62 Ṁ_⊙/R_⊙^ 2, and Ṁ_ = 20.57 Ṁ_⊙/R_⊙^ 2. A similar situation occurs when two stars have a comparable field complexity but different coronal activity i.e., YZ CMi and GJ 205 (#18, Ṁ_ = 2.32 Ṁ_⊙/R_⊙^ 2, F_ X≈ 10^5 ergs cm^-2 s^-1). The lowest Ṁ_ corresponds to the saturated M-dwarf EV Lac (#19), which has the strongest B_ R (1517 G) and one of the simplest complexities in our sample (n = 1.46331). The low complexity of the field means that the wind is dominated by open field lines, leading to very high wind velocities in the standard AWSoM model, but with a very low density, which in turn leads to small Ṁ_ values. 
We remind the reader that the base density of the stellar wind is fixed at the stellar surface and is the same for all the stars in the sample (Sect. <ref>). §.§ Stellar wind mass-loss rate and Rossby number Using the results of our stellar winds models, we can study how the Ṁ_ changes as a function of the Rossby number (R_ o). The Rossby number is a useful quantity because it not only removes the dependence on spectral type, but also relates the rotation period to magnetic field strength, complexity, and even stellar coronal activity. The latter is also important because cool stars exhibit a well-defined behavior between L_ X (or F_ X) and R_ o (saturated and unsaturated regimes). Thus, if we analyze Ṁ_ using this parameter, we can see (to some extent) all dependencies simultaneously. Figure <ref> shows the stellar mass-loss rate per unit surface area (Ṁ_/R^ 2_) as a function of the Rossby number (R_ o). The circles show our 3D MHD numerical results, while the empty, filled, and the plus sign within a square corresponds to observational estimates of astrospheres <cit.>, slingshot prominences <cit.>, and absorption during an exoplanetary transit <cit.>, respectively. We use the same method as for the simulated stars (Eq. <ref>) to calculate the R_ o of stars with constraints on their mass loss rate. Spectral types are indicated by different colors: cyan (F), yellow (G), orange (K), and red (M). The Sun is represented by a yellow star symbol. Dashed lines connect the common stars in our models and the observations. In this section, we will focus only on the resulting Ṁ_ from the numerical results. As was mentioned earlier, our 3D MHD simulated Ṁ_ values are in the same range as the Ṁ_ estimates from the Ly-α astrospheric absorption method. Note that since we are only simulating steady-state stellar winds, our comparison is mostly focused on the steady mass loss Ṁ_ (filled squares and squares with a plus sign). As such, it is not surprising that our Ṁ_ values appear 1 - 2 orders of magnitude below the estimates associated with sporadic mass loss events such as slingshot prominences in very active stars in the saturated regime (filled squares, ). Based on the relation between F_ X and R_ o <cit.>, and the broad correlation observed between Ṁ_ and F_ X <cit.>, we expect to see traces of a two-part trend (albeit with significant scatter) between Ṁ_ and R_ o: a flat or saturated part that is independent of stellar rotation (R_ o≲ 0.1, rapidly rotating stars), and a power law showing that the stellar wind mass loss rate decreases with increasing R_ o (R_ o > 0.1, slowly rotating stars). For stars in the unsaturated regime, we do see a trend in which Ṁ_ increases with decreasing R_ o. The relationship between Ṁ_ and R_ o retrieved from our simulations is logṀ_/R_^2 = (-1.13± 0.23) log R_ o + (0.50± 0.07) . The majority of the Ṁ_ derived from observation appears to follow the established relationship Ṁ_–R_ o, with some scatter within the error range. We do, however, notice four outliers, including three K stars and one G star. The K stars with the high Ṁ_ correspond to the binary 70 Oph A (K0V) and 70 Oph B (K5V). As for the 3^ rd K star and the G star, they correspond to evolved stars: δ Eri (K0IV, Ṁ_ = 0.6 Ṁ_⊙/R_⊙^ 2, R o ∼ 21) and DK UMA (G4III-I, Ṁ_ = 0.0077 Ṁ_⊙/R_⊙^ 2, R_ o ∼ 2.51). We do not expect evolved stars to follow the same trend as unsaturated main sequence stars because their winds might be generated from a different mechanism (such as pulsations, see ). 
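For reference, the fitted Ṁ–R_o relation quoted above can be evaluated directly; the sketch below simply applies the quoted slope and intercept and should only be used within the unsaturated range covered by the sample (roughly 0.1 < R_o ≲ 2).

import numpy as np

def mdot_per_area_from_rossby(ro):
    # Mass-loss rate per unit surface area in units of Mdot_sun / Rsun^2,
    # from log10(Mdot/R^2) = -1.13*log10(Ro) + 0.50 (fit quoted in the text)
    return 10.0 ** (-1.13 * np.log10(ro) + 0.50)

for ro in (0.15, 0.5, 1.0, 2.0):
    print(ro, round(float(mdot_per_area_from_rossby(ro)), 2))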
As for 70 Oph A and B, we do not have much insight into their eruptive activity levels in order to rule out whether or not the Ṁ inferred from the astropsheric technique was influenced by slingshot prominences or CME activity. As can be seen in Fig. <ref>, our numerical results in this region are essentially bracketed by the observations for which the R_ o reaches larger values. The largest Rossby number from our star sample corresponds to HD 219134 (K3V, R_ o = 2.02732), which is comparable to the accepted solar value. Since our models use ZDI maps as inner boundary conditions to simulate stellar winds, this implies that extending our numerical models to even larger R_ o would be very challenging as those ZDI reconstructions would require prohibitively long observing campaigns. While we have limited data points, we see that for objects with R_ o≲ 0.15, we do not obtain larger numerical values Ṁ_ even when the magnetic field strengths increase dramatically. For example, in the case of YZ CMi (B^ max_ R = 822 G, Ṁ_/R_^ 2 = 20.57 Ṁ_⊙/R_⊙^ 2) and GJ 1245 B (B^ max_ R = 404 G, Ṁ_ = 9.27 Ṁ_⊙/R_⊙^ 2). All stars on the left-hand side of Fig. <ref> lie beneath the maximum Ṁ_ value obtained for YZ CMi (B^ max_ R = 822 G, Ṁ_/R_^ 2 = 20.57 Ṁ_⊙/R_⊙^ 2). This is true even when R_ o varies by more than one dex, magnetic field strength by factors of 100, and the expected complexity number by ∼ 4. These results indicate that the contribution from the steady wind will only account for a small fraction of the Ṁ_ budget in the case of very active stars. Furthermore, the obtained behaviour hints of a possible saturation of the steady-state stellar wind contribution to Ṁ_, while the star could still lose significant mass through other mechanisms such as slingshot prominences or CME activity due to flares among others. According to <cit.> and references therein, cool stars can support prominences if their magnetospheres are within the centrifugal regime (i.e. R_ K < R_ A, where R_ K = √(GM_/Ω_^2) is the co-rotation radius). They provide estimates for the prominence masses (m_ p) and the ejection time-scales (t_ p) for a sample of cool stars. According to their analysis, DT Vir would have m_ p = 1.5 × 10^ 15 g and t_ p = 0.1 d, while the values for GJ 1245 B would be m_ p = 4.4 × 10^ 14 g, t_ p = 0.3 d. Using these values, they also reported the expected mass loss rate from prominences for these two stars in absolute units. In order to compare with the steady state wind, we convert their results to units of Ṁ_⊙/R_⊙^ 2. For DT Vir we have Ṁ_^ p/R_^ 2 = 0.49 Ṁ_⊙/R_⊙^ 2 and for GJ 1245 B the resulting value is Ṁ_^ p/R_^ 2 = 0.68 Ṁ_⊙/R_⊙^ 2. For the CMEs contribution, we can obtain an order of magnitude estimate by following the approach in <cit.>. They estimate the mass-loss rate from the CME (Ṁ_^ CME) as a function of L_ X and the power law index (α) of the flare frequency distribution. For the X-ray luminosity, we used the <cit.> database, and for the flare frequency distribution exponent we took α = 2 <cit.>. For DT Vir, with log(L_ X) = 29.75, we obtain Ṁ_^ CME/R_^ 2 ∼ 160 Ṁ_⊙/R_⊙^ 2. For GJ 1245 B, with log(L_ X) = 27.47, the estimated CME-mass loss rate is Ṁ_^ CME/R_^ 2 ∼ 12.8Ṁ_⊙/R_⊙^ 2. We emphasize here that this approach assumes that the solar flare-CME association rate holds for very active stars (see the discussion in ). As such, it does not consider the expected influence due to CME magnetic confinement (e.g. 
) which currently provides the most suitable framework to understand the observed properties of stellar CME events and candidates <cit.>. Still, we can clearly see that the input from CMEs to the total Ṁ_ could be higher than the steady wind and prominences for these two stars (with the latter contributing less in these cases). For instance, the estimated contribution of CMEs to the total Ṁ_ of DT Vir is almost 40 times higher than the value obtained for the steady stellar wind. We will discuss the cases of EV Lac and YZ CMi in Section <ref> §.§.§ Comparison between simulations and observations In addition to analyzing the general trends, we can compare the models for common stars between our sample and the observations in <cit.> and references therein. The stars in <cit.> contain a total number of 37 stars with a mix of main-sequence and evolved stars. The sample includes 15 single K-G stars among them 4 evolved stars, and 4 binaries. <cit.> reports individual Ṁ values for the G-K binary pairs (this means that it was possible to model their individual contribution to the astrosphere of the system or they were separated enough not to share a common astrosphere). This is important as, in principle, one could treat the binary pairs as individual stars. The rest of the star sample includes 22 M-dwarfs with 18 single M-dwarfs, 3 binaries, and 1 triple system. Unlike the G-K stars, Ṁ_ values for the M-dwarf binaries/triple system are listed as a single value (therefore, it means that it has to be taken as the aggregate of all the stars in the system). For the binary system GJ 338 AB we were unable to include it in the plot of Fig. <ref> due to a lack of needed information to estimate its R_ o. Following on the results from Sect. <ref>, our simulated mass loss rates for stars in the unsaturated regime agree well with those estimated from astrospheric detections (see Fig. <ref>). Specifically, for GJ 205 (M1.5V), 61 Cyg A (K5V), and HD 219134 (K3V) we obtain Ṁ_/R_^ 2 of 2.32, 3.98, and 1.50, respectively. These values are all consistent with their respective observational estimates, taking into account the typical uncertainties of the astrospheric absorption method[Astrospheric estimates on Ṁ_ should have an accuracy of about a factor of 2 with substantial systematic uncertainties <cit.>.]. While further observations could help to confirm this, the agreement between our asynchronous models and the observations indicates that, within this R_ o range, the temporal variability of Ṁ_ is minimal. This is certainly the case for the Sun (R_ o∼ 2.0) in which long-term monitoring has revealed only minor variability of the solar wind mass loss rate over the course of the magnetic cycle (, ). On the other hand, Ṁ_ from the 3D MHD simulations appear to fall short by an order of magnitude or more from the available estimates for ϵ Eri (K2V), EV Lac (M3.5V), and YZ CMi (M4.5V) with Ṁ_/R_^ 2 of 4.53, 0.62 and 20.57, respectively. We will discuss different possibilities for these discrepancies on each star in Sect. <ref>. However, it is important to remember that the Ṁ_ estimates from the Ly-α absorption technique contain systematic errors that are not easily quantified. One example is that they depend on the assumed properties and topology of the ISM <cit.>, which have not been fully agreed upon in the literature (e.g., ). 
While studies have provided a detailed characterization of the local ISM (see ), intrinsic uncertainties and additional observational limitations can greatly alter the estimated mass-loss rate values. These include column densities, kinematics, and metal depletion rates (), as well as local temperatures and turbulent velocities <cit.>. Furthermore, we would also like to emphasize the variation of the Ṁ_ in the astrospheric estimates with the assumed stellar wind velocity, as we believe that this factor is one of the largest potential source of uncertainty and discrepancy with our models. As discussed by <cit.>, this parameter is used as input in 2.5D hydrodynamic models to quantify the stellar wind mass loss rate. The Ly-α absorption signature, leading to Ṁ_, is determined to first order by the size of the astrosphere. The latter depends on the stellar wind dynamic pressure (P_ dyn∝Ṁ_ U_ sw), which implies an inverse relation between Ṁ_ and U_ sw <cit.>. The astrospheric analysis of <cit.> assumed a stellar wind velocity of 450 km s^-1 at 1 au (matching models of the heliosphere) for all main-sequence stars. However, we find that stellar wind velocities can vary significantly between different types of stars and even among the same spectral type for different magnetic field strengths and rotation periods. To quantify this, we compute the average terminal velocity of the wind, (U_ R^ T), by averaging U_ R over a sphere extracted at 99% of the maximum extent of each simulation domain (594 R_ for F, G, and K stars and 248 R_ for M-dwarfs; see Sect. <ref>). In the cases in which the spatial extension of our numerical domain allowed, we also computed the average wind velocity at 1 au. The resulting values, listed in Table <ref>, indicate variations in the wind velocity by factors of 5 or more when moving from F-type stars (U_ R^ T∼ 325 km s^-1) to M-dwarf (U_ R^ T∼ 1500 km s^-1). This is also illustrated in Fig. <ref>, which portrays the simulated stellar wind environment for HD 179949 (F8V), HD 73256 (G8V), HD 189733 (K2V), and DT Vir (M0V). We include a green iso-surface that corresponds to the wind velocity at 1 au for F, G, and K stars as for M-dwarfs it represents the average terminal wind velocity in the domain. The visualizations also include the equatorial projection of the wind dynamic pressure (P_ dyn = ρ U^ 2), normalized to the nominal Sun-Earth value, as well as on a sphere highlighting the wind 3D structure at 0.5 au. What is clear from this analysis is that is not ideal to use the same wind velocity for all spectral types. Even within the same spectral type, we can observe a wide range of terminal velocities (e.g., the velocity in K stars ranges from 400 km s^-1 to 700 km s^-1). As such, for models that require wind velocity as an input parameter, we recommend using the average radial wind velocity among a given spectral type. For G-K stars, we obtain wind velocities at 1 au in the range of 400 to 700 km s^-1 which is not too different from the wind velocity assumption of . This is also consistent with the fact that for these spectral types, we have a better agreement between Ṁ estimated from our simulations and those from the astropsheric technique <cit.>. For lower mass stars with relatively small R_ o we obtain velocities higher than 450 km s^-1 up to 3675 km s^-1. Note that due to computational limitations, the extent of our M-dwarf simulations does not reach up to 1 au (varying from 0.6 au for DT Vir to 0.16 au for GJ 1245 B). 
Nevertheless, as indicated by the calculated terminal velocities, even at closer distances the wind velocity is already >450 km s^-1, a situation that should still hold when propagated out to 1 au. Wind velocities on the order of 1000-1500 km s^-1 at distances of 1 au and beyond had been reported in high-resolution AWSoM simulations of the environment around the M5.5V star Proxima Centauri <cit.>. This helps to explain why our simulated mass-loss rates for EV Lac, YZ CMi, and ϵ Eri were lower than the observed ones (differences larger than a factor of 2). We discuss these cases in more detail in the following section. §.§.§ Exploring the cases of EV Lac, YZ CMi, ϵ Eri * YZ CMi & EV Lac Frequent stellar flares have been observed at YZ CMi in several wavelength ranges (). The flaring energy distribution of this star ranges from 10^30.6 to 10^34.09 erg <cit.> with a total flaring time that varies from 21 to 306 minutes. Likewise, there is also significant flare activity on EV Lac (). From spectroscopic and photometric studies of EV Lac, <cit.> reports to have found 27 flares (∼ 5.0 flares per day) in H α with energies between 1.61 × 10^31 erg −1.37 × 10^32 erg and 49 flares (∼ 2.6 flares per day) from the TESS lightcurve with energies of 6.32 × 10^31 erg −1.11 × 10^33 erg. With such high flare activity, it is possible that a large fraction of the Ṁ_ estimated in <cit.> for these stars could arise from transient phenomena (e.g., prominences, CMEs). Following the same approach described at the end of Section <ref>, we can obtain a rough estimate of Ṁ from CMEs for EV Lac and YZ CMi. For EV Lac we find Ṁ_^ CME/R_^ 2 = 55.5 Ṁ_⊙/R_⊙^ 2 assuming log(L_ X) = 28.69. In the case of YZ CMi, an log(L_ X) = 28.53 yields Ṁ_^ CME/R_^ 2 = 47.6 Ṁ_⊙/R_⊙^ 2. However, given the magnetic field strength observed in EV Lac and YZ CMi (a few kG, ), we expect that the magnetic confinement of CMEs would play an important role in these objects (see , , ). Therefore, it is not straightforward to estimate exactly how large the contribution of CMEs to Ṁ_ is for these stars. In addition, as discussed by <cit.>, EV Lac and YZ CMi are considered in the slingshot prominence regime. For EV Lac they estimate m_ p = 2.0 × 10^ 16 g and t_ p = 0.6 d, while for YZ CMi values of m_ p = 4.5 × 10^ 16 g and t_ p = 0.6 d are given. Using the associated mass loss rate values reported in <cit.>, we obtain Ṁ_^ p/R_^ 2 = 3.16 Ṁ_⊙/R_⊙^ 2 for EV Lac and Ṁ_^ p/R_^ 2 = 8.32 Ṁ_⊙/R_⊙^ 2 for YZ CMi. This suggests another possible explanation for the discrepancies between our models and the astrospheric estimates is that some of the stellar wind detected for EV Lac and YZ CMi contains material from the slingshot prominences. Indeed, the location of the latter in the Ṁ_ – R_ o diagram (Fig. <ref>) appears more consistent with the mass loss rate estimates from slingshot prominences by <cit.>. Moreover, <cit.> noted that the YZ CMi astrospheric absorption comes primarily from neutrals near and inside the astropause, rather than from the hydrogen wall where neutral H density is highest. Therefore, using Ly alpha absorption to calculate Ṁ_ from YZ CMi will result in substantial uncertainty. Finally, as mentioned in Sect. <ref>, there is a significant difference between the wind velocity assumed by <cit.> and our results. Our average terminal wind velocity for YZ CMi (1709 km s^-1) and EV Lac (3675 km s^-1) is significantly higher than the wind velocity of 450 km s^-1 assumed in <cit.> at 1 au. 
While the wind velocity in EV Lac might be overestimated in our models (due to the usage of fiducial AWSoM parameters), we still expect relatively large wind velocities for this star (∼ 1000-1500 km s^-1) given its magnetic field strength and Rossby number (see e.g., ). As was discussed in Sect. <ref>, while our terminal wind velocity for M-dwarfs is calculated closer to the star (0.33 au for YZ CMi and 0.16 au for EV Lac), we do not expect a large reduction in the average velocity between these distances and 1 au. As such, the fast wind velocity resulting in our simulations of YZ CMi and EV Lac would imply lower Ṁ_ values when analyzed following the astrospheric technique of <cit.>. * ϵ Eri With a relatively slow rotation period (11 d), and weak large-scale magnetic field (< 50 G), ϵ Eri cannot be considered within the slingshot prominence regime (like in the cases of YZ CMi and EV Lac). Because of this, we do not expect a significant presence of slingshot prominences in the Ṁ value of this star. On the other hand, the analysis of <cit.>, estimated the contribution of flare-associated CMEs to the mass loss rate. They reported an upper limit of 1.09 Ṁ_⊙ / R _⊙^ 2, which is insignificant when compared to the star's overall estimated Ṁ_ value by <cit.> and the astrospheric technique (56 Ṁ_⊙ / R _⊙^ 2). Therefore, the contribution from CMEs is also most likely not responsible for the elevated astrospheric Ṁ_ value on this star and its discrepancy with our steady-state models. On the other hand, multiple observations of the large-scale magnetic field geometry of ϵ Eri reveal that it evolves over a time-scale of months <cit.>. According to <cit.>, the maximum field strength can reach up to 42 G. As shown in Fig. <ref>, a global increase in the magnetic field strength causes an increase in Ṁ_. The Zeeman Doppler Imaging map of ϵ Eri used to drive the 3D MHD model has a B_ R^ max = 25 G leading to Ṁ_/R_^2 = 4.53 Ṁ_⊙/R_⊙^2. This value is comparable to the numerical result obtained by <cit.> for this star (Ṁ_/R_^2∼ 5.3 Ṁ_⊙/ R_⊙^ 2). Increasing the surface magnetic field strength of ϵ Eri to the maximum value reported in observations will raise the mass loss rate to ∼ 10 Ṁ_⊙/ R_⊙^ 2. As such, the variability of the stellar magnetic field and its expected modulation of the stellar wind properties could account for some of the differences between the simulated and the observed mass loss rates. However, corroborating this would require contemporaneous ZDI and astrospheric measurements which, to our knowledge, have not been performed on any star so far. As ϵ Eri goes through a magnetic/activity cycle (), we can expect relatively large variations in Ṁ_ values in our Alfvén-wave driven stellar wind models. Finally and following the discussion for YZ CMi and EV Lac, the average wind velocity for ϵ Eri at 1 au (554 km s^-1) resulting from our models exceeds the one assumed in <cit.>. This will result in a smaller estimated Ṁ_ value from the pressure-balance astrospheric technique. In this way, the deviation between our models and the astrospheric detection of ϵ Eri could be due to the combined contribution of all the preceding elements (i.e., CMEs, cycle-related variability of the magnetic field, higher stellar wind velocity), and therefore we do not consider this discrepancy critical to our analysis. §.§ Stellar wind and Circumstellar region This section focuses on using the stellar wind results obtained from the 3D MHD simulations to assess the conditions an exoplanet would experience. 
This includes the characterization of the Alfvén surface for the various stellar wind solutions, the properties of the stellar wind in the habitable zone of these stars (in terms of the dynamical pressure of the wind), and the resulting magnetosphere size for these stellar wind conditions (assuming that a planet with the same properties/magnetization as Earth is in the HZ of these stars). The obtained quantities are listed in Table <ref> and <ref>. §.§.§ Stellar wind properties and orbital distances * Alfvén surface size Figure <ref> summarizes our results showing the stellar wind environment around cool main sequence stars. We include the average size of the AS, resulting from our 21 3D MHD models, indicated in filled diamonds. To complement this information, empty diamonds correspond to the expected average AS size employing the scaling relation provided in Sect. <ref>, and using the ZDI information from 29 additional stars ( and reference therein). The green region corresponds to the optimistic HZ, calculated using the approach provided by <cit.> and the expected behaviour of the luminosity, temperature as a function of stellar mass on the main sequence (). Each square indicates the limits of the optimistic HZ for each star in our sample. These have been color-coded by the stellar wind dynamic pressure, normalized to the average Sun-Earth value. The position of the Earth is indicated by the ⊕ symbol. In the background, a sample of the semi-major axis of some exoplanets is included. There are a few noteworthy aspects of Fig. <ref>. First of all, the 3D MHD simulated AS values (filled diamonds) do not show a clear trend with stellar mass. Instead, we see more or less similar AS regardless of the spectral type of the star (Table <ref>). We see a similar behavior for stars whose AS were extracted from the scaling relationship presented in Eq. <ref> (empty diamonds). There is a significant scatter in the obtained distribution of AS against M_, indicating that the intrinsic dependency with the surface magnetic field properties can in principle be replicated among multiple spectral types. However, we remind the reader that this result is also partly a consequence of our fixed choice for the base parameters of the corona and stellar wind solution (Sect. <ref>), which could in principle vary among different spectral types and activity stages (i.e. ages). As such, the generalization of the results presented here requires further investigation from both, observational constraints and numerical simulations. We can also see that for late K and M-dwarfs, AS reaches orbital distances comparable to their HZ limits. Examples of this from our sample are GJ 1245 B (AS = 0.028 au, HZ_ inner = 0.033 au) and YZ CMi (AS= 0.178 au, HZ_ inner= 0.09 au). This situation has been also identified in previous case studies of stellar winds and exoplanets (e.g, ). The location of the HZ relative to the stellar Alfvén surface must be considered when studying the interactions between a star and a planet. A planet orbiting periodically or continuously within the AS region could be directly magnetically connected to the stellar corona, which could have catastrophic effects on atmospheric conservation <cit.>. On the other hand, a planet with an orbit far outside this limit will be decoupled from the coronal magnetic field and interact with the stellar wind in a manner similar to the Earth (e.g. ). 
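Since these comparisons depend on where the HZ sits relative to the AS, it may help to sketch how optimistic HZ distances of the kind used above can be approximated. The snippet below is a simplified stand-in for the cited prescription: it adopts fixed effective-flux limits (≈1.78 for the recent-Venus inner edge and ≈0.32 for the early-Mars outer edge, evaluated at solar effective temperature) and omits the T_eff-dependent correction terms, so the numbers are illustrative only.

import math

def optimistic_hz_au(luminosity_lsun, s_inner=1.78, s_outer=0.32):
    """Rough optimistic HZ edges in au from the stellar luminosity.
    s_inner/s_outer are approximate recent-Venus / early-Mars effective
    fluxes at solar Teff; the Teff-dependent terms of the full prescription
    are omitted (illustrative only)."""
    return math.sqrt(luminosity_lsun / s_inner), math.sqrt(luminosity_lsun / s_outer)

# Illustrative luminosities (not taken from the target list)
for name, lum in [("early-K dwarf", 0.30), ("late-M dwarf", 0.002)]:
    hz_in, hz_out = optimistic_hz_au(lum)
    print(f"{name}: optimistic HZ ~ {hz_in:.3f} - {hz_out:.3f} au")

For a luminosity of roughly 0.002 L_⊙ this already returns an inner edge near 0.03 au, consistent with the GJ 1245 B value quoted above.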
In the case of a planet orbiting in and out of the AS, the planet will experience strongly varying wind conditions, whose magnetospheric/atmospheric influence will be greatly mediated by the typical time-scale of the transition <cit.>. * Dynamic pressure We also see a general trend in Fig. <ref> in which the dynamic pressure at the HZ boundaries increases as we move from earlier to later spectral types. For example, P_ dyn^ Inn, HZ for the lowest-mass star GJ 1245 B is 447.39 P_⊕ nearly 200 times stronger than for the highest-mass star τ Boo with P_ dyn^ Inn, HZ = 2.76 P_⊕. Our results also show a large variability in P_ dyn as we move from the inner to the outer edge of the HZ of G, K, and M-dwarfs (Table <ref>). For these stars, the P dyn at the inner HZ is almost 6 times stronger than that at the outer edge of the HZ (i.e., EV Lac P_ dyn^ Inn, HZ = 33.18 P_⊕, P_ dyn^ Out, HZ = 6.27 P_⊕). For F stars, the difference is smaller, around a factor of 2 like in the case of HD 179949, where P_ dyn^ Inn, HZ = 2.51 P_⊕ and P_ dyn^ Out, HZ = 0.48 P_⊕. The reason is that the HZs of these stars are farther from the star, where the wind density starts to become less variable. Moreover, in some cases, we have P_ dyn at the inner and outer edge of the star HZ comparable to the typical range experienced by the Earth (0.75 and 7 nPa, ). For example, HD 73256 (G8V, 6.45 - 1.18 P_⊕ ∼ 9.675 - 1.77 nPa), HD 130322 (K0V, 2.56 - 0.46 P_⊕ ∼ 3.84 - 0.69 nPa), τBoo (F7V, 2.76 - 0.40 P_⊕ ∼ 4.14 - 0.6 nPa). For the case of M-dwarfs, we have dynamic pressures higher than those experienced by Earth, as in the case of DT Vir (M0V, 88.97 - 16.20 P_⊕ ∼ 133.455 - 24.3 nPa). This is because the HZ is located near the star where the density is highest. This indicates that planets orbiting at very close distance to the star (∼ 0.03 - 0.05 au) would experience extreme space weather conditions with P_ dyn up to 10^3 and 10^4 P_⊕. These values are comparable to the ones estimated in <cit.> for Proxima d and for Proxima b in <cit.>. However, the reader is reminded here that any point from our simulations should be interpreted as an indication of the average conditions, but should not be treated as a specific absolute value (since it will change depending on the instantaneous local density and velocity of the wind (both a function of the evolving stellar magnetic field). In addition, we notice a scatter in P_ dyn estimates at the HZ when comparing stars of the same spectral type. This is not surprising since the P_ dyn depends on the wind velocity and density at a given place. This also translates into having a range of dynamic pressure that a planet will experience within the HZ. This will defer from one orbital distance to the other as we can see in Fig. <ref> where we show the equatorial plane color-coded by the dynamic pressure. We can use our 3D models to investigate also the influence due to the orbital inclination. To illustrate this, Fig. <ref> shows a 2D projection of the normalized dynamic pressure P_ dyn extracted from spherical surfaces matching the midpoint of the HZ of HD 179949 (F8V), TYC 198-509-1 (G7V), 61 Cyg A (K6V), and GJ 205 (M1.5V). We notice that in the case of F and G stars (i.e., HD 179949, and TYC-198-509-1) we have a large P_ dyn variation with inclination around a factor 7. However, P_ dyn values, are still relatively small in terms of absolute units (i.e., 0.01 - 10 P_⊕ ∼ 0.015 - 15 nPa). 
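The normalised dynamic pressures quoted above follow directly from the local wind density and speed. The sketch below shows the conversion, assuming a pure-proton wind and adopting 1 P_⊕ ≈ 1.5 nPa, as implied by the conversions quoted in the text; the example densities and speeds are illustrative placeholders rather than values extracted from the simulations.

M_P = 1.6726e-27           # proton mass [kg]
P_EARTH_NPA = 1.5          # normalisation implied by the quoted conversions [nPa]

def dynamic_pressure_npa(n_cm3, v_kms):
    """Wind dynamic pressure rho*v^2 in nPa, assuming a pure-proton wind."""
    rho = n_cm3 * 1e6 * M_P            # kg m^-3
    v = v_kms * 1e3                    # m s^-1
    return rho * v * v * 1e9           # Pa -> nPa

# Illustrative conditions: solar-wind-like values at 1 au versus a denser,
# faster wind in a close-in M-dwarf HZ (placeholder numbers).
for label, n, v in [("solar-wind-like, 1 au", 7.0, 450.0),
                    ("close-in M-dwarf HZ", 300.0, 800.0)]:
    p = dynamic_pressure_npa(n, v)
    print(f"{label}: P_dyn = {p:.2f} nPa = {p / P_EARTH_NPA:.1f} P_earth")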
For K and M-dwarfs, we see less variability in the P_ dyn for the different inclinations, a more homogeneous P_ dyn, especially in the case of the K star. However, in these cases, the P_ dyn can reach values > 100 P_⊕ (> 150 nPa). Our results also show that even with an extreme orbit around the G-type star (TYC 198-509-1) with an inclination matching the current sheet, we would most likely not reach the very high P_ dyn values as in the case of the K and M-dwarfs as we move closer to the star. As such, the inclination of the orbit plays a secondary role compared to the distance. This is clearly seen in the color gradient that gets redder and redder as we move toward lower masses (so the HZ is closer). On the other hand, the variability of P_ dyn, which we can see in Fig. <ref> while represented in the same 'spatial scale', it does not coincide in terms of 'temporal scales'. In other words, the x-axis in Fig. <ref> do not correspond to the same timescale units for each star, where the 360 degrees of longitude correspond to “1 orbital period”. However, the orbital period is very different for a planet in the HZ of an F-type star (within a few au) compared to a planet orbiting an M-dwarf (within a fraction of an au). A planet orbiting an M-dwarf star experiences the variations in P_ dyn on a much faster timescale (∼ 1 day for each current sheet crossing), while these variations are much longer for more massive stars. This means that even if the P_ dyn values were the same, the faster variability over the orbital period for low-mass stars would result in planets and their magnetospheres/atmospheres having less time to recover from passing through regions of high P_ dyn than planets around more massive stars. Finally, following the results compiled by <cit.>, if we consider the presence of a rocky exoplanet with an atmosphere similar to those of Venus and Mars at those mid-HZ locations, we would expect atmospheric ion losses between 2 × 10^24 ions s^-1 and 5 × 10^24 ions s^-1. This of course assumes that all processes occur in the same way as in the solar system (which might not be necessarily true for some regions of the vast parameter space of this problem). The ion losses will depend heavily on the type of stars that the exoplanet orbits, both in terms of the high-energy spectra and the properties of the stellar wind (see e.g. ). If the rocky exoplanet is found around the HZ of an M-dwarf, the planet might suffer from unstable stellar wind conditions as previously stated that might increase the ion losses in the exoplanetary atmosphere. We will consider the case of Earth with its magnetosphere in the following section. §.§.§ Magnetopause Standoff Distances Using the dynamic pressure, we can define a first-order approximation to determine the magnetosphere standoff distance (R_ M) of a hypothetical Earth-like planet orbiting at the HZ around each star in our sample. This is done by considering the balance between the stellar wind dynamic pressure and the planetary magnetic pressure (Eq. <ref>, ): R_ M = R_ E [B_ p^2/ 8 π P_ dyn]^1/6 The Earth's equatorial dipole field and radius are represented by B_ p and R_ E respectively. Normally the total wind pressure should be considered (i.e., thermal, dynamic, and magnetic), but in all the cases here considered, we can neglect the contributions of the magnetic and thermal pressures. For this calculation, we assume an equatorial dipole magnetic field of 0.3 G, similar to that of the Earth <cit.>. 
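A compact implementation of this pressure-balance estimate is given below. It evaluates the standoff expression quoted above in Gaussian units (B_p in G, P_dyn converted to dyn cm^-2) with the Earth-like dipole of 0.3 G assumed in the text; as a first-order expression it neglects magnetopause currents and the thermal and magnetic wind pressures, consistent with the stated approximations.

import math

def standoff_distance_re(p_dyn_npa, b_p_gauss=0.3):
    """First-order magnetopause standoff distance in Earth radii from the
    pressure-balance expression above (cgs units: B in G, P in dyn cm^-2)."""
    p_cgs = p_dyn_npa * 1e-8          # 1 nPa = 1e-9 Pa = 1e-8 dyn cm^-2
    return (b_p_gauss ** 2 / (8.0 * math.pi * p_cgs)) ** (1.0 / 6.0)

# Nominal Earth conditions versus a 100x stronger wind (illustrative values)
for label, p in [("Earth, nominal (1.5 nPa)", 1.5),
                 ("100 x Earth dynamic pressure", 150.0)]:
    print(f"{label}: R_M ~ {standoff_distance_re(p):.1f} R_E")

Because R_M ∝ P_dyn^-1/6, even a hundredfold increase in dynamic pressure only roughly halves the standoff distance, while restoring an Earth-like R_M under such conditions would require a planetary dipole about ten times stronger (R_M ∝ B_p^1/3).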
The magnetospheric standoff distance is expressed in Earth's radii (Eq. <ref>). The different R_ M, HZ values for the different stars in our sample are listed in table <ref>. Note that we only estimate the R_ M in the cases where the HZ is in the super-Alfvénic regime (). Our estimated R_ M, HZ for F, G, and early K stars have values closer to the standard size of Earth's dayside magnetosphere (∼ 10 R_⊕, see ). This is comparable to the value obtained by <cit.> for Proxima c (∼ 6 - 8 R_⊕ in both activity levels), assuming an Earth-like dipole field on the planet surface. For the late K and M-dwarfs in our star sample, R_ M starts to reach lower values < 50% from that of Earth. This suggests that a planet orbiting these stars must have a stronger dipole magnetic field than that of the Earth to withstand the wind conditions since R_ M∝ B_ p ^1/3. However, in <cit.> they show that contrary to what we have seen so far, the magnetosphere might actually not act as a shield for the stellar wind-driven escape of planetary atmospheres. In fact, they reported an ion loss for Earth that ranges from 6 × 10^24 ions s^-1 - 6 × 10^26 ions s^-1 which is higher than what Venus and Mars lose. Further modeling studies are needed in order to characterize the stellar wind influence on the atmospheric loss of rocky exoplanets (e.g., ), whose input stellar wind parameters can be extracted from this investigation. § SUMMARY & CONCLUSIONS In this study we employed a state-of-the-art 3D MHD model (SWMF/AWSoM) to investigate the dependencies between different star properties (R_, M_, B_ R, and P_ rot) and a number of stellar wind parameters (AS, Ṁ_, J̇_, P_ dyn) of cool main sequence stars. We present numerical results of 21 stars going from F to M stars with magnetic field strengths between 5 and 1.5 kG and rotation periods between 0.71 d and 42.2 d. The large-scale magnetic field distribution of these stars, obtained by previous ZDI studies, were used to drive the solutions in the Stellar Corona domain, which are then self-consistently coupled for a combined solution in the Inner Astrosphere domain in the case of F, G, and K stars. Our results showed a correlation between the average AS size and B_ R^ avg, regardless of the spectral type of the star (Eq. <ref>). We also obtained a strong correlation between Ṁ_ and B_ R^ avg for the different spectral types (excluding EV Lac, Eq. <ref>). The correlation between J̇_ and B_ R, on the other hand, was dominated by the absolute dependence on the stellar size, with significant scatter resulting mainly from the variability in Ṁ_, the distribution of Ω_ in our sample and the equatorial AS size where the maximum torque is applied. Having established these star-wind relations, we looked in detail at Ṁ_, since it is the only observable parameter of the stellar wind for which comparisons can be made. Using the complexity number as a function of the Rossby number R_ o–defined previously in the literature– we were able to investigate the dependence of magnetic complexity on Ṁ_. Our results showed that for more active stars, as in the case of M-dwarfs, the field strength starts to dominate over the complexity in the contribution on shaping Ṁ_. Also, for cases in which the magnetic field strength and complexity were comparable, we obtained similar Ṁ_. This indicates that in these cases the stellar properties (R_, M_, and P_ rot) play a secondary role in changing Ṁ_. 
We then used our stellar wind results to investigate its behaviour with respect to the well-known stellar activity relationship (F_ X vs R_ o with the saturated and unsaturated regimes). For stars in the unsaturated regime, we see a trend where Ṁ_ increases with decreasing R_ o (Eq. <ref>). For stars in the saturated regime, we find that the contribution of the steady wind is only a small part of the Ṁ_ budget. This suggests that there could be saturation in Ṁ_ due to the steady stellar wind, while the star could lose even more mass through other mechanisms, such as transient events (i.e. prominences, coronal mass ejections). In addition to analyzing the general trends, we compared the model results of stars in our sample and objects with astrospheric Ṁ_ constraints. Our simulated Ṁ_ for stars in the unsaturated regime agree well with those estimated from astrospheric detections (namely for GJ 205, 61 Cyg A, and HD 219134). On the other hand, Ṁ_ from the 3D MHD simulations appear to differ by an order of magnitude or more from available estimates for ϵ Eri, EV Lac, and YZ CMi. We discussed how these results might be connected with the underlying assumption made by the observational analysis with respect to the stellar wind speed. Indeed, for all the stars in which our models differed largely from the literature estimates, we obtained much larger stellar wind speeds than the ones used in the astrospheric method. As such, we emphasized the importance of using the appropriate wind velocity when estimating Ṁ_ from observations. We further discussed various possibilities for the discrepancies in EV Lac, YZ Cmi, ϵ Eri. For the two flaring stars, EV Lac and YZ CMi, we suspect that the high Ṁ_ estimates from the Ly-α absorption technique could be dominated by material from slingshot prominences and possibly CMEs (uncertain due to the expected magnetic confinement of CMEs in these stars). Note that this possibility was also considered by <cit.> in the original astrospheric analysis. In the case of ϵ Eri, we do not expect a large contribution from prominences or CMEs to the observed Ṁ_. However, as ϵ Eri undergoes a magnetic cycle, the stellar magnetic field and its expected modulation of stellar wind properties could explain some of the differences between the simulated and observed Ṁ_. Moreover, we used the stellar wind results from the 3D MHD simulations to assess the conditions that an exoplanet would experience, and provide the stellar wind conditions in the entire classical Habitable Zones of our target stars. Our results show a scatter in the obtained distribution of AS versus M_, suggesting that the intrinsic dependence with the surface magnetic field properties can be reproduced for several spectral types. With respect to the stellar wind dynamic pressure, our results show that the orbital inclination plays a secondary role compared to the orbital distance. We have also found that a planet orbiting K and M stars must have a stronger dipole magnetic field than that of Earth to withstand the wind conditions, if the planetary magnetic field is indeed acting as a shield (this paradigm, however, is starting to be challenged by solar system observations). Finally, the properties of the stellar wind in the HZ of different spectral types obtained here can be used in future studies to, for instance, estimate the expected radio emission due to wind-magnetosphere interactions or the planetary atmospheric mass loss due to erosion of the stellar wind from ion escape processes. 
§ ACKNOWLEDGEMENTS The authors would like to thank the referee for valuable comments that improved the quality of the paper. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (<www.gauss-centre.eu>) for funding this project by providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz Supercomputing Centre (<www.lrz.de>) under application ID 21761 (PI: Alvarado-Gómez). JJC and KP acknowledge funding from the German Leibniz Community under project number P67/2018. CG was supported by NASA contract NAS8-03060 to the Chandra X-ray Center. This research has made use of NASA’s Astrophysics Data System Bibliographic Services. § DATA AVAILABILITY The data would be made available to the community on reasonable request due to the volume of the 3D simulations. Extractions of specific quantities discussed in the paper could be requested from the corresponding author. mnras § TRENDS WITH MAXIMUM RADIAL MAGNETIC FIELD We have also quantified AS_ R and Ṁ_ / R^ 2_ as a function of the absolute maximum radial magnetic field strength (|B_ R|^ max|). It is important to also investigate |B_ R|^ max, since the average radial magnetic field strength may suffer from cancellations, especially if the star has a symmetric surface magnetic field distribution. Figure <ref> shows the simulated average Alfvén surface area (AS, top) and the mass-loss rate per unit surface area (Ṁ_/R^ 2_, bottom) as a function of the maximum absolute radial magnetic field on the stellar surface (|B_ R|^ max). We see a trend where AS and Ṁ_/R^ 2_ increase with increasing magnetic field strength. We fit this trend to a power law by applying the bootstrap method used to derive this parameter as a function of the average radial magnetic field (similar to the procedure used in Sect. <ref>, Eqs. <ref> and <ref>). logAS_ R = (0.44 ± 0.05) log |B_ R|^ max + (0.54 ± 0.08) logṀ_ / R^ 2_ = (0.83 ± 0.07) log |B_ R|^ max - (0.48 ± 0.10)
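For convenience, the two fits above can be evaluated directly. The helper below simply applies the quoted best-fit coefficients, with |B_R|^max in G and AS_R and Ṁ_/R_^2 in the normalised units used throughout the paper; the uncertainties on the fitted exponents are ignored in this sketch.

import math

def scaling_fits(b_r_max_gauss):
    """Evaluate the two power-law fits quoted above (B in G; AS and Mdot/R^2
    in the normalised units used in the text)."""
    log_b = math.log10(b_r_max_gauss)
    as_size = 10 ** (0.44 * log_b + 0.54)
    mdot_per_r2 = 10 ** (0.83 * log_b - 0.48)
    return as_size, mdot_per_r2

for b in [10.0, 100.0, 1000.0]:
    a, m = scaling_fits(b)
    print(f"|B_R|^max = {b:7.1f} G -> AS ~ {a:6.1f}, Mdot/R^2 ~ {m:6.2f}")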
http://arxiv.org/abs/2307.03944v1
20230708095104
Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States
[ "Jie Qian", "Jie Li", "Shi-Yao Zhu", "J. Q. You", "Yi-Pu Wang" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall", "physics.optics" ]
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China Hefei National Laboratory, Hefei 230088, China [email protected] Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China [email protected] Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China Light-matter interaction is crucial to both understanding fundamental phenomena and developing versatile applications. Strong coupling, robustness, and controllability are the three most important aspects in realizing light-matter interactions. Topological and non-Hermitian photonics, have provided frameworks for robustness and extensive control freedom, respectively. How to engineer the properties of the edge state such as photonic density of state, scattering parameters by using non-Hermitian engineering while ensuring topological protection has not been fully studied. Here we construct a parity-time-symmetric dimerized photonic lattice and generate complex-valued edge states via spontaneous PT-symmetry breaking. The enhanced strong coupling between the topological photonic edge mode and magnon mode in a ferromagnetic spin ensemble is demonstrated. Our research reveals the subtle non-Hermitian topological edge states and provides strategies for realizing and engineering topological light-matter interactions. Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States Yi-Pu Wang August 12, 2023 ======================================================================================== Introduction.—Topology has evolved as a powerful governing principle for predicting and harnessing the robust propagation of currents in various systems, including condensed matter system <cit.>, acoustics <cit.>, mechanics <cit.> and photonics <cit.>. In topological photonics, a topological invariant ensures robust localization or propagation of electromagnetic waves <cit.>. On the other hand, non-Hermitian photonics <cit.> has also flourished in recent years, not only due to the ubiquitous non-Hermiticity in nature <cit.>, but also because the non-Hermiticity provides additional degrees of freedom to manipulate the wave behaviors. In pursuit of the simultaneous robustness and greater control flexibility, as well as the interest in fundamental research, non-Hermitian topological physics <cit.> has received considerable attention and substantial development. Scientists investigate new paradigms <cit.> and explore potential applications in this interdisciplinary territory <cit.>. A coupled system can have two forms of non-Hermiticity. One kind is generated when there is asymmetric interaction between the sites, which leads to the non-Hermitian skin effect <cit.>. The other type, which is caused by on-site loss, can lead to intriguing phenomena associated with the parity-time (PT) symmetry. 
The PT-symmetric systems have received special attention, because they were proved to have real spectra <cit.>. A sequence of studies have studied the topologically protected bound (defect) states in PT-symmetric topological systems <cit.>, where the defect states are real in the PT-symmetry unbroken phase. Moreover, a number of studies have investigated whether topological edge states exist in the PT-symmetric systems <cit.>, concluding that since the edge state is not an eigenstate of the PT operator, an imaginary eigenvalue is obtained along with the spontaneous PT-symmetry breaking. In this case, a non-Hermitian edge state is obtained. We find that these imaginary edge states in the PT-symmetric system are actually topologically protected by the particle-hole symmetry <cit.>. In the one-dimensional (1D) non-Hermitian PT-symmetric Su-Schrieffer-Heeger (SSH) model <cit.>, the chiral symmetry of the system is broken, losing its topological ℤ invariant, but the particle-hole symmetry of the system is preserved and the system owns a topological ℤ_2 invariant. In the presence of perturbations that do not violate the particle-hole symmetry, the real parts of the eigenvalues of the edge modes remain 0, reflecting the topologically protected characteristics. Under this situation, the topological photonic mode with robust properties can be further manipulated by non-Hermiticity, which is highly desirable for investigating light-matter interactions <cit.>. To investigate the interaction between topological photonic modes and matters <cit.>, we employ the photon-magnon coupling system <cit.>, which has benefits including the flexible tunability and experimental demonstration at room temperature. In this Letter, we use a set of lossy microwave resonators to build 1D non-Hermitian SSH photonic lattices. By coupling a ferromagnetic spin ensemble (FSE) to Hermitian and non-Hermitian SSH chains and monitoring the strength of the coupling between the photonic modes and the magnon mode in the FSE, we verify the topological edge states and bulk states. Non-Hermiticity introduced by the on-site alternating losses breaks the passive PT-symmetry of zero-energy modes and results in two complex-valued edge states, which localize exponentially at the opposite ends of the chain [Fig. <ref>(b)]. Further, the photonic density of state (PDOS) at boundaries is larger than that in the Hermitian case [Fig. <ref>(a)], which strengthens the coupling between the topological photonic mode and the magnon mode. Our experiment demonstrates the potential of manipulating the interaction between topological photonic states and matter by exploiting non-Hermiticity. System and model.—The SSH chain consists of six unit cells [Figs. <ref>(a) and <ref>(b)], in which each unit contains two split-ring-resonators (SRRs) fabricated on the F4B substrate [Fig. <ref>(a)]. In the experiment, the SRR exhibits a resonance at ω_0/2π=5.62 GHz with an intrinsic loss of γ_0/2π=24.42 MHz, and the topological property is unaltered by the uniform losses along the chain <cit.>. Therefore, SRRs with the same loss can be used to build the Hermitian SSH model. Two neighboring SRRs are separated by staggered spacings to realize the intracell and intercell coupling rates, v and w. Edge states appear in the finite chain when the bulk winding number of the Hermitian Hamiltonian is 𝒲_h=1 <cit.>. 
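This criterion is straightforward to verify numerically. The sketch below computes the winding of the SSH off-diagonal Bloch element, assumed here to take the standard form h(k) = v + w e^{-ik}; for the hopping rates of the Hermitian chain quoted in the next paragraph (|w| > |v|) the winding number is 1, while exchanging v and w gives the trivial value 0.

import numpy as np

def ssh_winding_number(v, w, nk=2001):
    """Winding number (in magnitude) of h(k) = v + w*exp(-1j*k) around the
    origin, assuming the standard SSH convention for the bulk Hamiltonian."""
    k = np.linspace(0.0, 2.0 * np.pi, nk)
    h = v + w * np.exp(-1j * k)
    phase = np.unwrap(np.angle(h))
    return int(round(abs(phase[-1] - phase[0]) / (2.0 * np.pi)))

print(ssh_winding_number(216.5, 341.0))   # w > v -> 1 (topological phase)
print(ssh_winding_number(341.0, 216.5))   # w < v -> 0 (trivial phase)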
The effective Hermitian SSH chain is designed in the topological non-trivial phase (v/2π=216.5 MHz, w/2π=341 MHz) and the Hamiltonian is written as <cit.>: ℋ_h/ħ=∑_s=1^2N(ω_0-iγ_0)â_s^†â_s+∑_s=1^2N-2(vâ_sâ_s+1^†+wâ_s+1â_s+2^†), where â_s^† (â_s) is the photon creation (annihilation) operator of the s-th SRR. The uniform losses of the units only yield all eigenvalues of the chain to have the same imaginary component iγ_0. The eigenvalues of the coupled SRRs are plotted in the complex plane, as shown in Fig. <ref>(c). A pair of zero-energy modes (Re(ω_m=6,7)-ω_0=0, green dots) appear in the band gap (gray area), which are the edge modes. The measured transmission spectrum of the chain is shown in Fig. <ref>(d), where the peaks correspond to the resonances of the eigenmodes. By simulating the field distribution at the edge mode frequency of ω_0/2π=5.62 GHz, we find that the electromagnetic field tends to localize at both edges of the chain, as predicted by wave function distribution <cit.>. In the low-frequency region, the measured spectrum [Fig. <ref>(d), solid line] displays an amplitude deviation from that in the high-frequency region. This is due to the residual dissipative coupling between SRRs <cit.>. Then, on-site non-Hermiticity is added to the SSH chain. As depicted in Fig. <ref>(a), resistors R_A=0.1 Ω and R_B=2.7 Ω are integrated into odd and even sites of the chain, respectively, which induce alternated losses of γ_A/2π=36 MHz and γ_B/2π=73 MHz. The Hamiltonian becomes <cit.>: ℋ_nh/ħ= ∑_s∈ X(ω_0-iγ_A)â_s^†â_s+∑_s∈ Y(ω_0-iγ_B)â_s^†â_s +∑_s=1^2N-2(vâ_sâ_s+1^†+wâ_s+1â_s+2^†), where X={1, 3, 5, ..., 2N-1}, Y={2, 4, 6, ..., 2N}, and N=6. The integrated resistors shift ω_0/2π to 5.48 GHz, and the hopping rates shift to v/2π=208.5 MHz, and w/2π=335.5 MHz. The alternated losses make the system a passive PT-symmetric one. The spontaneous PT-symmetry breaking occurs in zero-energy modes, resulting in a splitting of the imaginary parts of zero-energy modes, as shown in Fig. <ref>(e). One with a low loss Im(ω_m=6)/2π=40.42 MHz (Edge_1, blue dot) localizes at the left boundary of the chain, and the other with a high loss Im(ω_m=7)/2π=68.58 MHz (Edge_2, red dot) localizes at the right, as schematically shown in Fig. <ref>(b). The bulk Hamiltonian still preserves the PT-symmetry when δγ/2<|w-v|, and δγ=γ_B-γ_A. In this regime, the topological property is still determined by the generalized integer winding number 𝒲_nh <cit.>. 𝒲_nh=1 guarantees the existence of two non-Hermitian topological edge modes. Experiment results.—To investigate the edge modes engineered by the non-Hermiticity, we measure the PDOS and linewidths of the edge and bulk modes in both Hermitian and non-Hermitian cases. Notably, conventional detection of the PDOS relies on the near-field radiation <cit.>, but in the non-Hermitian situation, the local gain and loss will diminish its reliability. Using the spin ensemble as a probe, we can directly detect the PDOS. In addition, it allows us to study the strong coherent interaction between the topological photonic modes and magnons. In the experiment, the spin ensemble employed to couple with the chain is a 1-mm diameter yttrium iron garnet (YIG) sphere. 
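Before analysing the magnon coupling in detail, the complex spectrum underlying Eqs. (1) and (2) can be checked with a short numerical sketch. The code below diagonalises the 12-site tight-binding Hamiltonian with the parameters quoted above for the non-Hermitian chain; the residual dissipative couplings present in the experiment are neglected, so the printed edge-mode losses should only approximately reproduce the quoted values of 40.42 MHz and 68.58 MHz.

import numpy as np

# Parameters quoted for the non-Hermitian chain (all as omega/2pi, in MHz).
N     = 6                  # unit cells -> 2N = 12 sites
w0    = 5480.0             # on-site resonance
gam_a = 36.0               # loss on odd sites
gam_b = 73.0               # loss on even sites
v, w  = 208.5, 335.5       # intra- / inter-cell hopping

# Tight-binding Hamiltonian of Eq. (2); residual dissipative couplings omitted.
H = np.zeros((2 * N, 2 * N), dtype=complex)
for s in range(2 * N):
    H[s, s] = w0 - 1j * (gam_a if s % 2 == 0 else gam_b)   # s even <-> odd site
for s in range(2 * N - 1):
    H[s, s + 1] = H[s + 1, s] = v if s % 2 == 0 else w

eigvals = np.linalg.eigvals(H)
# The two "zero-energy" edge modes sit closest to w0 in their real part.
edge = sorted(eigvals, key=lambda z: abs(z.real - w0))[:2]
for z in sorted(edge, key=lambda z: -z.imag):
    print(f"edge mode: Re = {z.real:.1f} MHz, loss = {-z.imag:.1f} MHz")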
The magnon mode in the sphere interacts with the local photonic modes, with a coupling strength g proportional to ηχ√(nSħω_r/2V) <cit.>, where η≤1 describes the spatial overlap and polarization matching between the photonic mode and the magnon mode, χ is the gyromagnetic ratio, n is the total number of spins, S=5/2 is the spin number of the ground-state Fe^3+ ion in YIG, ω_r is the resonance frequency, and V is the photonic mode volume. Consequently, the square of the coupling strength, g^2, directly reflects the PDOS at the coupling location. First, we move the YIG sphere to each site (labeled as s, s=1,2,3,...,12) of the Hermitian chain and obtain the PDOS distribution of the m-th eigenmode by analyzing the transmission spectra. The bias magnetic field is perpendicular to the device plane, and mappings of transmission spectra are measured versus electromagnet current and probe frequency. Figures <ref>(b) and <ref>(e), for instance, show the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The coupling strength between the m-th eigenmode of the chain and the magnon mode at the s-th site is defined as g_m,s, which can be obtained by fitting the level repulsion with: ω_m,s^±=1/2[ω̃_n+ω̃_m±√((ω̃_n-ω̃_m)^2+4g_m,s^2)], where ω̃_n=ω_n-iγ_n and ω̃_m=ω_m-i(γ_m+κ_m) are the complex eigenvalues of the uncoupled magnon mode and the m-th eigenmode of the chain, respectively. γ_n is the total loss rate of the magnon mode, γ_m is the intrinsic loss rate of the m-th eigenmode, and κ_m is the extrinsic loss rate of the m-th eigenmode to the input/output ports <cit.>. Coupling strengths between the magnon mode and the edge modes (m=6,7) at site-1 and site-12 are obtained by fitting the level repulsion depicted in Figs. <ref>(b) and <ref>(e), giving g_edge,1/2π=g_edge,12/2π=80 MHz. Similarly, coupling strengths between the magnon mode and the bulk mode (m=8) at site-1 and site-12 are obtained as g_bulk,1/2π=g_bulk,12/2π=37 MHz. g_m,s^2 as a function of the site index s is illustrated in Figs. <ref>(c) and <ref>(d), denoted by blue (m=8) and red dots (m=6,7), respectively. The observed g_m,s^2 are in good agreement with the intensity distributions of the wave function |φ_m,s|^2 (gray bar diagram). Then, we couple the spin ensemble to the non-Hermitian SSH chain, as shown in Fig. <ref>(a). Figures <ref>(b) and <ref>(e) display the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The mappings show a similar amount of level repulsion but reveal very different linewidths of the edge modes. Using Eq. (<ref>), the loss of the edge mode at site-1 is fitted to be γ_edge,1/2π=41.1 MHz, which contains contributions from both edge modes (m=6,7). The relation is γ_edge,s=[Im(ω_m=6)·|φ_6,s|^2+Im(ω_m=7)·|φ_7,s|^2]/(|φ_6,s|^2+|φ_7,s|^2), and the wave functions of the edge modes |φ_m,s|^2 are displayed as the bar diagram in Fig. <ref>(d). Similarly, we get γ_edge,12/2π=67.9 MHz. More interestingly, the coupling strengths between the magnon mode and the edge modes at site-1 and site-12 are observed to be g_edge,1/2π=g_edge,12/2π=112 MHz, larger than in the Hermitian case (80 MHz). We plot g_m,s^2 versus site index s for m=8 and m=6, 7 in Figs. <ref>(c) and <ref>(d), respectively. The bulk mode remains extended, similar to the Hermitian bulk mode. But, as shown in Fig. <ref>(d), the low-loss edge state (Edge_1) accumulates at the left boundary, while the high-loss edge state (Edge_2) accumulates at the right edge.
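The level-repulsion model of Eq. (3) used for these fits can be illustrated with a few lines of code. The sketch below sweeps a magnon mode across the low-loss edge mode at site 1, using the fitted values g/2π = 112 MHz and γ_edge,1/2π = 41.1 MHz; the magnon linewidth is an illustrative placeholder rather than a measured number.

import numpy as np

def hybridized_modes(w_n, w_m, g, gam_n, gam_m):
    """Complex eigenfrequencies of Eq. (3) for a magnon mode coupled to a
    single photonic eigenmode (all frequencies and rates in MHz)."""
    a = w_n - 1j * gam_n
    b = w_m - 1j * gam_m
    root = np.sqrt((a - b) ** 2 + 4.0 * g ** 2)
    return 0.5 * (a + b + root), 0.5 * (a + b - root)

# Edge-mode values quoted for site 1 of the non-Hermitian chain; the magnon
# linewidth (3 MHz) is a placeholder for illustration only.
w_m, gam_m, g = 5480.0, 41.1, 112.0
for w_n in np.linspace(w_m - 400.0, w_m + 400.0, 5):
    p, m = hybridized_modes(w_n, w_m, g, gam_n=3.0, gam_m=gam_m)
    print(f"magnon at {w_n:7.1f} MHz -> branches at {p.real:7.1f}, {m.real:7.1f} MHz")

At zero detuning the two branches are separated by approximately 2g, which is the avoided crossing visible in the measured mappings.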
The introduction of on-site loss does contribute to the increase of PDOS at the boundaries. The mechanism can be interpreted as follows: When the PT-symmetry of the edge states is broken, the energy flow between adjacent resonators is partly blocked <cit.>. The low-loss (high-loss) edge state becomes more localized at the low-loss (high-loss) site, as shown in Figs. <ref>(b) and <ref>(a), it corresponds the left (right) boundary of the chain. It is also intriguing to detect the properties of the non-Hermitian topological edge states from spectroscopic measurements. In the PT-symmetry unbroken phase, two topological edge states cannot be distinguished via spectroscopic measurement, as shown in Fig. <ref>(a). The absorptivity spectra A_1 measured when loading microwave to port 1 is totally coincident with A_2 measured when loading microwave to port 2. In the symmetry broken phase, two topological edge states can be distinguished in spectra, as shown in Fig. <ref>(b). The spectra A_1 exhibits the low-loss state with a relatively narrow bandwidth, while the spectra A_2 reveals the high-loss state. Finally, we anticipate to discuss about some additional characteristics of the exceptional point (EP) in the non-Hermitian chain. The dimensionless eigenvalues are defined as β_real+iβ_imag, where β_real=[Re(ω)-ω_0]/(v+w), β_imag=[|Im(ω)|-γ̅]/(v+w), and γ̅=(γ_A+γ_B)/2. In a finite SSH chain, when increasing the non-Hermitian parameter δγ/2(v+w), a series of exceptional points are gradually reached [Figs. <ref>(c) and <ref>(d)]. It can be found that the EP of the edge modes is distinctly away from the EPs of the bulk modes. The edge modes experience spontaneous PT-symmetry breaking (SPTB) at EP_1, where δγ/2(v+w) is only about 0.02. With the increase of chain length, the non-Hermiticity needed for SPTB in edge modes decreases exponentially. In the case of N≫1, any finite δγ will lead to the SPTB in edge modes <cit.>. However, the minimum requirement of SPTB in bulk mode needs δγ/2|w-v|, which is much larger than 0.02. Additional analysis is provided in the supplementary materials. Conclusion.—We have implemented the PT-symmetric non-Hermitian topological SSH model with microwave resonators and achieved the control of topological edge states using the on-site non-Hermiticity. Through spontaneous PT-symmetry breaking, we obtain the non-Hermitian edge modes, where the photonic mode densities are enhanced at both ends of the chain. We realize the strong coupling between the edge modes and the magnon mode in both Hermitian and non-Hermitian cases. We experimentally verify that the coupling strength between the non-Hermitian edge states and the spin ensemble is stronger than that in the Hermitian situation. Our research illustrates non-Hermiticity engineered topological edge states and paves a way for studying strong coherent interaction between topological photonic modes and matter. This work is supported by the National Key Research and Development Program of China (No. 2022YFA1405200), National Natural Science Foundation of China (No. 92265202, No. 11934010, No. U1801661, and No. 12174329), and the Fundamental Research Funds for the Central Universities (No. 2021 FZZX001-02). 99 Burkov-16 A. A. Burkov, Topological semimetals, Nature Materials 15, 1145 (2016). Hasan-10 M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010). Zhaoju-15 Z. Yang, F. Gao, X. Shi, X. Lin, Z. Gao, Y. Chong, and B. Zhang, Topological Acoustics, Phys. Rev. Lett. 114, 114301 (2015). Ma-19 G. 
Ma, M. Xiao and C. T. Chan, Topological phases in acoustic and mechanical systems, Nat. Rev. Phys. 1, 281 (2019). Yihao-22 H. Xue, Y. Yang, B. Zhang, Topological acoustics, Nature Reviews Materials 7, 974 (2022). Huber-16 S. D. Huber, Topological mechanics, Nat. Phys. 12, 621 (2016). Haldane-08 F. D. M. Haldane and S. Raghu, Possible realization of directional optical waveguides in photonic crystals with broken time-reversal symmetry, Phys. Rev. Lett. 100, 013904 (2008). Wang-09 Z. Wang, Y. Chong, J. D. Joannopoulos, and M. Soljačić, Observation of unidirectional backscattering-immune topological electromagnetic states, Nature 461, 772 (2009). Lu-14 L. Lu, J. D. Joannopoulos, and M. Soljačić, Topological photonics, Nat. Photon. 8, 821 (2014). Ozawa-19 T. Ozawa et al., Topological photonics, Rev. Mod. Phys. 91, 015006 (2019). Blanco-Redondo-18 A. Blanco-Redondo, B. Bell, D. Oren, B. J. Eggleton and M. Segev, Topological protection of biphoton states, Science 362, 568 (2018). Yang-18 B. Yang et al., Ideal Weyl points and helicoid surface states in artificial photonic crystal structures, Science 359, 1013 (2018). Klembt-18 S. Klembt et al., Exciton-polariton topological insulator, Nature, 562, 552 (2018). Feng-17 L. Feng, R. EI-Ganainy, and L. Ge, Non-Hermitian photonics based on parity–time symmetry, Nat. Photon. 11, 752 (2017). EI-Ganainy-18 R. EI-Ganainy et al., Non-Hermitian physics and PT symmetry, Nat. Phys. 14, 11 (2018). Longhi-18 Stefano Longhi, Parity-time symmetry meets photonics: A new twist in non-hermitian optics, Europhysics Letters 120, 64001 (2018). Bender-07 C. M. Bender, Making sense of non-hermitian hamiltonians, Reports on Progress in Physics 70, 947 (2007). Ashida-20 Y. Ashida, Z. P. Gong, and M. Ueda, Non-Hermitian physics, Adv. Phys. 69, 249 (2020). Coulais-21 C. Coulais, R. Fleury, and J. Van Wezel, Topology and broken Hermiticity, Nat. Phys. 17, 9 (2021). Bergholtz-21 E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-Hermitian systems, Rev. Mod. Phys. 93, 015005 (2021). Yao-18 S. Yao and Z. Wang, Edge States and Topological Invariants of Non-Hermitian Systems, Phys. Rev. Lett. 121, 086803 (2018). Yokomizo-19 K. Yokomizo and S. Murakami, Non-Bloch band theory of non-Hermitian systems, Phys. Rev. Lett. 123, 066404 (2019). CHL-20 C. H. Lee, L. Li, R. Thomale, and J. Gong, Unraveling non-Hermitian pumping: Emergent spectral singularities and anomalous responses, Phys. Rev. B 102, 085151 (2020). Helbig-20 T. Helbig et al., Generalized bulk–boundary correspondence in non-Hermitian topolectrical circuits. Nat. Phys. 16, 747 (2020). Xue-20 L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, Non-Hermitian bulk–boundary correspondence in quantum dynamics, Nat. Phys. 16, 761 (2020). Zhao-19 H. Zhao et al., Non-Hermitian topological light steering, Science 365, 1163 (2019). St-Jean-17 P. St-Jean et al., Lasing in topological edge states of a one-dimensional lattice, Nat. Photon. 11, 651 (2017). Parto-18 M. Parto et al., Edge-Mode Lasing in 1D Topological Active Arrays, Phys. Rev. Lett. 120, 113901 (2018). Hu-21 B. Hu et al., Non-Hermitian topological whispering gallery, Nature 597, 655 (2021). Alvarez-18 V. M. Martinez Alvarez, J. E. Barrios Vargas, and L. E. F. Foa Torres, Non-Hermitian robust edge states in one dimension: Anomalous localization and eigenspace condensation at exceptional points, Phys. Rev. B 97, 121401(R) (2018). Okuma-20 N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Topological Origin of Non-Hermitian Skin Effects, Phys. 
Rev. Lett. 124, 086801 (2020). Bender-98 C. M. Bender and S. Boettcher, Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry, Phys. Rev. Lett. 80, 5243 (1998). Schomerus-13 H. Schomerus, Topologically protected midgap states in complex photonic lattices, Opt. Lett. 38, 1912 (2013) Malzard-15 S. Malzard, C. Poli, and H. Schomerus, Topologically Protected Defect States in Open Photonic Systems with Non-Hermitian Charge-Conjugation and Parity-Time Symmetry, Phys. Rev. Lett. 115, 200402 (2015). Weimann-17 S. Weimann et al., Topologically protected bound states in photonic parity-time-symmetric crystals, Nat. Mater. 16, 433-438 (2017). Stegmaier-21 A. Stegmaier et al., Topological Defect Engineering and PT Symmetry in Non-Hermitian Electrical Circuits, Phys. Rev. Lett. 126, 215302 (2021). Esaki-11 K. Esaki, M. Sato, K. Hasebe, and M. Kohmoto, Edge states and topological phases in non-Hermitian systems, Phys. Rev. B 84, 205128 (2011). Hu-11 Y. C. Hu and T. L. Hughes, Absence of topological insulator phases in non-Hermitian PT-symmetric Hamiltonians, Phys. Rev. B 84, 153101 (2011). Xue-17 L. Xiao, X. Zhan, Z. H. Bian, K. K. Wang, X. Zhang, X. P. Wang, J. Li, K. Mochizuki, D. Kim, N. Kawakami, W. Yi, H. Obuse, B. C. Sanders, P. Xue, Observation of topological edge states in parity–time-symmetric quantum walks, Nature Physics 13, 1117 (2017). Cheng-22 D. Cheng et al., Truncation-dependent PT phase transition for the edge states of a two-dimensional non-Hermitian system, Phys. Rev. B 105, L201105 (2022). SM See Supplementary Materials at ... for device details, Hamiltonian and topological invariant analysis, additional transmission mappings, and the experimental measurement details, which includes Refs. <cit.>. Su-79 W. P. Su, J. R. Schrieffer and A. J. Heeger, Solitons in Polyacetylene, Phys. Rev. Lett. 42, 1698 (1979). Gutzler-21 R. Gutzler, M. Garg, C. R. Ast, K. Kuhnke, and Kern, K. Light–matter interaction at atomic scales, Nat. Rev. Phys. 3, 441 (2021). Ruggenthaler-18 M. Ruggenthaler, N. Tancogne-Dejean, J. Flick, H. Appel, and A. Rubio, From a quantum-electrodynamical light–matter description to novel spectroscopies, Nat. Rev. Chem. 2, 0118 (2018). Kockum-19 A. F. Kockum, A. Miranowicz, S. De Liberato, S. Savasta, and F. Nori, Ultrastrong coupling between light and matter, Nat. Rev. Phys. 1, 19 (2019). Kim-21 E. Kim et al., Quantum Electrodynamics in a Topological Waveguide, Phys. Rev. X 11, 011015 (2021). Huebl-PRL-2013 H. Huebl, C. W. Zollitsch, J. Lotze, F. Hocke, M. Greifenstein, A. Marx, R. Gross, and S. T. B. Goennenwein, High Cooperativity in Coupled Microwave Resonator Ferrimagnetic Insulator Hybrids, Phys. Rev. Lett. 111, 127003 (2013). Tabuchi-PRL-2013 Y. Tabuchi, S. Ishino, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Hybridizing Ferromagnetic Magnons and Microwave Photons in the Quantum Limit, Phys. Rev. Lett. 113, 083603 (2014). Zhang-PRL-2014 X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Strongly Coupled Magnons and Cavity Microwave Photons, Phys. Rev. Lett. 113, 156401 (2014). Tobar-PRApp-2014 M. Goryachev, W. G. Farr, D. L. Creedon, Y. Fan, M. Kostylev, and M. E. Tobar, High-Cooperativity Cavity QED with Magnons at Microwave Frequencies, Phys. Rev. Applied 2, 054002 (2014). You-npj-2015 D. Zhang, X.-M. Wang, T.-F. Li, X.-Q. Luo, W. Wu, F. Nori, J. Q. You, Cavity quantum electrodynamics with ferromagnetic magnons in a small yttrium-iron-garnet sphere, npj Quantum Information 1, 15014 (2015). Wang-2019 Y.-P. Wang, J. W. Rao, Y. Yang, P.-C. Xu, Y. S. Gui, B. M. 
Yao, J. Q. You, and C.-M. Hu, Nonreciprocity and Unidirectional Invisibility in Cavity Magnonics, Phys. Rev. Lett. 123, 127202 (2019). Wang-2020 Y.-P. Wang and C.-M. Hu, Dissipative couplings in cavity magnonics, Journal of Applied Physics 127, 130901 (2020). Rameshti-22 B. Z. Rameshti, S. V. Kusminskiy, J. A. Haigh, K. Usami, D. Lachance-Quirion, Y. Nakamura, C. Hu, H. X. Tang, G. E. W. Bauer and Y. M. Blanter, Cavity Magnonics, Physics Reports 979, 1-60 (2022). Yuan-22 H. Y. Yuan, Y. Cao, A. Kamra, P. Yan, and R. A. Duine, Quantum magnonics: when magnon spintronics meets quantum information science, Physics Reports 965, 1 (2022). Bellec-13 M. Bellec, U. Kuhl, G. Montambaux, and F. Mortessagne, Tight-binding couplings in microwave artificial graphene, Phys. Rev. B 88, 115437 (2013). Peng-14 B. Peng, Ş. K. Özdemir, F. Lei, F. Monifi, M. Gianfreda, G. Long, S. Fan, F. Nori, C. M. Bender and L. Yang, Parity-time-symmetric whispering-gallery microcavities, Nat. Phys. 10, 394 (2014).
http://arxiv.org/abs/2307.05633v1
20230711074839
Transaction Fraud Detection via an Adaptive Graph Neural Network
[ "Yue Tian", "Guanjun Liu", "Jiacun Wang", "Mengchu Zhou" ]
cs.LG
[ "cs.LG" ]
Journal of Class Files, Vol. 14, No. 8, August 2021 Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals Transaction Fraud Detection via an Adaptive Graph Neural Network Yue Tian, Guanjun Liu, Senior Member, IEEE, Jiacun Wang, Senior Member, IEEE, and Mengchu Zhou, Fellow, IEEE This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Yue Tian and Guanjun Liu are with Department of Computer Science, Tongji University, Shanghai 201804, China (e-mail: [email protected]; [email protected]). Jiacun Wang is with the Department of Computer Science and Software Engineering, Monmouth University, W. Long Branch, NJ 07764, USA (e-mail: [email protected]). Mengchu Zhou is with the Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102 USA (e-mail: [email protected]). ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Many machine learning methods have been proposed to achieve accurate transaction fraud detection, which is essential to the financial security of individuals and banks. However, most existing methods leverage original features only or require manual feature engineering. They lack the ability to learn discriminative representations from transaction data. Moreover, criminals often commit fraud by imitating cardholders' behaviors, which causes the poor performance of existing detection models. In this paper, we propose an Adaptive Sampling and Aggregation-based Graph Neural Network (ASA-GNN) that learns discriminative representations to improve the performance of transaction fraud detection. A neighbor sampling strategy is performed to filter noisy nodes and supplement information for fraudulent nodes. Specifically, we leverage cosine similarity and edge weights to adaptively select neighbors with similar behavior patterns for target nodes and then find multi-hop neighbors for fraudulent nodes. A neighbor diversity metric is designed by calculating the entropy among neighbors to tackle the camouflage issue of fraudsters and explicitly alleviate the over-smoothing phenomena. Extensive experiments on three real financial datasets demonstrate that the proposed method ASA-GNN outperforms state-of-the-art ones. Graph neural network, transaction fraud, weighted multigraph, attention mechanism, entropy. § INTRODUCTION Online transaction is a popular and convenient way of electronic payment. It also increases the incidences of financial fraud and causes massive monetary losses to individuals and banks. The global losses reached 25 billion dollars in 2018 and have kept increasing <cit.>. According to statistics from Nilson Report, the losses jumped to 28.65 billion in 2020 <cit.>. 
Financial institutions have taken measures to prevent fraud. In traditional methods, online transactions are checked against some expert rules, and then suspicious transactions are fed to a detection model. The task of the detection model is to mine fraud patterns (represented as some rules) from sizeable historical transaction data so that the model can find transactions that match these rules. However, the ability of these rules is limited and hardly adapts to the fast changes in fraud patterns. How to quickly mine and represent as many fraud patterns as possible and fit their changes is complicated since fraudsters and detectors of fraud transactions have kept a dynamic gaming process for a long time <cit.>. Transaction records often contain transaction-related elements, such as location, date, time, and relations. Although there are machine learning-based methods to detect fraudulent transactions, most require manual feature engineering based on the above elements and the construction of supervised classifiers <cit.>. These methods fail to automatically detect fraud patterns and express important behavior information <cit.>. On the one hand, there are many interactions among transactions <cit.>. On the other hand, the transaction behaviors of users are dynamic <cit.>. Hence, designing a more discriminative representation framework for transaction fraud detection remains a big challenge. The camouflage of fraudsters is another challenge that causes performance degradation and poor generalization of many detection approaches <cit.>. Recently, graph neural networks (GNNs) have been used to learn representations automatically for some prediction tasks <cit.>. In contrast to traditional machine learning methods, GNN utilizes a neighborhood aggregation strategy to learn representations and then uses a neural network for node classification and link prediction <cit.>. These methods can capture rich interactions among samples and avoid feature engineering <cit.>. However, learning discriminative representations by directly applying these graph techniques to our transaction fraud detection problem is challenging. They ignore the relationship among features and the dynamic changes in cardholders’ behaviors. We visualize the representations of the general GNN models, including GraphSAGE and GCN. As shown in Figs. 1 and 2, they fail to distance the fraudulent transactions from the legitimate ones. Moreover, GNNs face an over-smoothing problem (indistinguishable representations of nodes in different classes), which results from the over-mixing of information and noise <cit.>. In the applications of transaction fraud detection, the fact that fraudulent nodes are connected to legitimate ones by disguising the cardholders' behaviors <cit.>, as shown in Fig. 3, exacerbates the effects of the over-smoothing issue. To tackle the above problem, we propose an Adaptive Sampling and Aggregation-based GNN for transaction fraud detection, named ASA-GNN. It integrates our newly proposed Adaptive Sampling and Aggregation methods. First, we use raw transaction records to construct the transaction graph, which considers the relationship of features and the dynamic changes in cardholders' behaviors. Based on it, we design a sampler to filter as many noisy neighbors as possible while retaining structural information. Cosine similarity and edge weight are used to select similar neighbor nodes. Then, we over-sample these neighbor nodes to tackle the need for more links among fraudulent nodes. 
To deal with the camouflage issue of fraudsters, a neighbor diversity metric is defined and calculated based on the entropy among neighbor nodes to distinguish whether neighborhood aggregation is harmful. Each node has its neighborhood aggregation degree. As a result, intraclass compactness and interclass separation can be guaranteed. This work aims to make the following new contributions: * We propose a graph neural network that learns discriminative representations to improve the performance of transaction fraud detection. * We propose a new sampling strategy to filter noisy nodes and capture neighbors with the same behavior pattern for fraudulent nodes based on the distance between two nodes measured by cosine similarity and edge weight. * We define a neighbor diversity metric to make each node adaptive in its aggregation process, which handles the camouflage issue of fraudsters and alleviates the over-smoothing phenomena. * Extensive experiments conducted on three financial datasets show that the proposed ASA-GNN achieves significant performance improvements over traditional and state-of-the-art methods. The rest of this paper is organized as follows. Section II presents the related work. Section III describes the proposed ASA-GNN. Section IV presents three real datasets and discusses the experimental results of performance comparison, ablation studies, and parameter sensitivity analysis. Section V concludes the paper. § BACKGROUND AND RELATED WORK §.§ Transaction Fraud Detection Model Researchers have proposed many methods based on expert rules and machine learning in many fields including transaction fraud detection tasks, which have achieved much success <cit.>. Their core is to learn some information from historical data to detect fraudulent transactions automatically. They have been proven effective for known fraud patterns but cannot deal with unknown fraud types <cit.>. Experts have started to use deep learning methods to solve it. Therefore, according to the correlations among the transaction features, deep learning methods can automatically capture cross-feature relationships so that these transaction records can be accurately portrayed, which helps detect fraud behaviors <cit.>. The Convolutional Neural Network (CNN) method is one of the commonly used methods <cit.>. In addition to the relationship among transaction features, existing feature engineering methods extract the association of transaction records to improve performance <cit.>. The aggregation strategy is a classical feature engineering method for transaction fraud detection. It groups the transaction records according to the specified time and then extracts the amount-related features and numbers of these records as the aggregation features <cit.>. Location and merchant code are also considered and used to generate aggregation features, increasing the user's periodic behavior information <cit.>. Moreover, some methods, such as a Recurrent Neural Network (RNN) method <cit.>, start to explore the dynamic information of transactions versus time <cit.>. A Long Short-Term Memory (LSTM) network relies on the evolution of data distribution versus time to capture the dynamic information <cit.>. By considering the small and various changes in data distribution, an ensemble method is proposed to achieve a better performance <cit.>. To comprehensively focus on the various mentioned relationships, researchers have utilized transaction records to construct a graph <cit.>. 
For example, GNN can capture the relationships among transactions and achieve better performance in fraud detection. However, since this method uses only one feature to construct a sparse graph, it fails to mine many useful fraud features <cit.>. Most of the mentioned approaches fail to comprehensively consider the relationship among transactions, the relationship among the transaction features, and dynamic change information of cardholders' behaviors. In our previous work <cit.>, we constructed a weighted multigraph to tackle the challenge since it can use multiple features as long as logic propositions can represent them. Based on this weighted multigraph, we use GNN to extract the above relationships and the dynamic changes. However, the method has some shortcomings, as stated in Section I. This work is motivated by the need to overcome such shortcomings. §.§ GNN A GNN is an effective framework that can learn graph representations by modeling the relationships of non-Euclidean graph data <cit.>. The concept is initially outlined in <cit.>, where a convolution operation in image processing is used in the graph data processing. After that, several graph-based methods are proposed and applied. Most earlier algorithms obtain the embedding representations in two steps: 1) Obtain a sequence of neighbor nodes for each node by using a random walk strategy; 2) Use machine learning models to gain the topological structure information of a graph and obtain the representation for each node. Although the topological structure information is investigated, these algorithms ignore the attributes of nodes. Some graph methods utilize the attributes of nodes based on text and statistical information <cit.>. For example, a Graph Convolutional Network (GCN) method leverages spectral graph convolutions to extract feature information <cit.>. A Graph Attention Network (GAT) method specifies the contribution of different neighbors by an attention mechanism <cit.>. GraphSAGE can sample and aggregate neighbor nodes to update the embedding of nodes flexibly<cit.>. A Relational Graph Convolutional Network (RGCN) method can model relational data <cit.>. Recent studies have handled large-scale graphs and overcome the problem of computational complexity <cit.>. Considering that a heterogeneous graph contains a large number of types of nodes and edge information, as well as their changes over time, researchers have proposed heterogeneous GNNs <cit.>, and dynamic GNNs <cit.>. In addition, the interpretability and structural optimization of GNNs are also studied <cit.>. However, applying the above GNNs to our transaction fraud detection problem fails to utilize all possible features to construct a graph. The graph built in this way may lack much vital information. A competitive graph neural network (CGNN) method <cit.> utilizes a heterogeneous graph to model normal and fraudulent behaviors in eCommerce. By a 3D convolutional mechanism, a spatial-temporal attention-based graph network (STAGN) method <cit.> can detect fraudulent transactions from a location-based transaction graph. MAFI utilizes aggregator-level and relation-level attention to learn neighborhood information and different relation <cit.>. LGM-GNN uses local and global information to learn more discriminative representations for prediction tasks <cit.>. Although these methods have tried to learn more discriminative representations, they ignore that excessive aggregation makes indistinguishable representations of nodes in different classes. 
It results in the over-smoothing phenomenon in GNNs <cit.>. In the applications of transaction fraud detection, fraudsters often disguise cardholders' behaviors by various means. Thereby, there are some edges among fraud nodes and legitimate ones <cit.>. It exacerbates the influence of the over-smoothing phenomenon. Facing the camouflage issue of fraudsters, the CAmouflage-REsistant GNN (CARE-GNN) method <cit.> defines a label-aware similarity measure using the l_1-distance to select neighbors. However, it only focuses on the similarity of labels. l_1-distance loses its effectiveness in high-dimensional space and fails to describe the similarity among complex behaviors. In our previous work <cit.>, we measure the distance using cosine similarity, which makes up for the shortcoming of l_1-distance. The TG constructed according to transaction data in <cit.> can focus on the relationship among dynamic transaction behaviors, static transaction attributes, and transactions themselves. However, it ignores that fraudsters avoid trades with others to cover up their behaviors, which results in the lack of links among fraudulent nodes. Meanwhile, it fails to tackle the issue of over-smoothing. In this work, we utilize cosine similarity and edge weight to remedy the mentioned flaws and focus on estimating whether neighborhood aggregation is harmful to solving the over-smoothing issue. § PROPOSED APPROACH This section first describes the preliminaries in the field of transaction fraud detection. After that, ASA-GNN is described in detail. Important notations are listed in Table 1. §.§ Preliminary Transaction Record. A transaction record r consists of l attributes and a label y ∈{0, 1}. Transaction Graph (TG). A transaction graph is a weighted multigraph 𝒢=(𝒱, ℛ, 𝒫,𝒲eight, ℰ), where 1. 𝒱 is the set of |ℛ| nodes and each node v denotes a record r ∈ℛ; 2. 𝒫 is the set of m logic propositions that are assertions with respect to the attributes of transaction records and m≥ 1; 3. 𝒲eight: 𝒫→ℕ is a weight function; and 4. ℰ=⋃_i=1^m{(a,b)_w^p_i|a∈𝒱∧ b∈𝒱∧ a≠ b∧ p_i(a,b)=True∧ w=𝒲eight(p_i)}; The logic propositions are based on expert rules such that TG ensures the effectiveness of features and reflects the dynamic changes in cardholders' behaviors. In <cit.>, we have defined TG. In comparison with <cit.>, the biggest contributions of this paper lie in the adaptive sampling and aggregation methods, which allow us to utilize GNN to learn the discriminative representations. Assume that the underlying graph of a GNN is 𝒢={𝒱,ℰ}. GNNs aim to learn the embedding representations for all nodes and a mapping function such that the predicted results can be obtained. In a general GNN framework <cit.>, the update function for a single layer is as follows: h_v^k=σ (𝒲^k (h_v^k-1⊕𝔸^k ({ h_v'^k-1: v' ∈𝒩_v }) )), where h_v^k, σ and 𝒲^k represent the embedding representation of node v, activation function, and shared parameter matrix at k-th layer respectively. Given a node v, 𝒩_v represents the set of its neighbor nodes, 𝔸^k denotes an aggregator function at the k-th layer which can aggregate the rich information from 𝒩_v and ⊕ is the operator that combines the embedding of v and its neighbor nodes. General aggregator functions include a mean aggregator and a pooling one. The mean aggregator is defined as: 𝔸^k= 1/||𝒩_v||∑_v' ∈𝒩_vh_v'^k-1. After the K layers' learning, we utilize a classification layer to predict the samples' labels: ŷ_̂v̂=Softmax(𝒲^Kh_v^K). 
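To illustrate this generic pipeline, the following minimal NumPy sketch implements a single layer with a mean aggregator followed by the softmax read-out; the toy graph, the concatenation combiner, and all array shapes are illustrative assumptions rather than the configuration used in our experiments.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gnn_layer(h, neighbors, W):
    """One generic GNN layer: mean-aggregate the neighbors' embeddings and
    combine the result with the node's own embedding, h_v combined with A({h_v'})."""
    n_nodes, dim = h.shape
    out = np.zeros((n_nodes, W.shape[1]))
    for v in range(n_nodes):
        agg = h[neighbors[v]].mean(axis=0) if neighbors[v] else np.zeros(dim)
        out[v] = relu(np.concatenate([h[v], agg]) @ W)   # sigma(W^k [h_v ; agg])
    return out

def detect(h_last, W_last):
    """Classification layer: class probabilities from the last-layer embeddings."""
    return softmax(h_last @ W_last)

# Toy usage: 4 transactions with 8-dimensional features, 2 classes.
rng = np.random.default_rng(0)
h0 = rng.normal(size=(4, 8))
neighbors = [[1, 2], [0], [0, 3], [2]]          # adjacency lists of a toy graph
W1 = rng.normal(size=(16, 8))                   # (2*dim, hidden)
W_out = rng.normal(size=(8, 2))                 # (hidden, classes)
probs = detect(gnn_layer(h0, neighbors, W1), W_out)
```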
In the field of fraud detection, we first construct a TG from the transaction records ℛ={r_1,r_2,...,r_|ℛ|}, then train the GNN to obtain the nodes' embedding representations at the last layer, and finally apply a classification layer to predict whether a transaction is fraudulent. §.§ ASA-GNN The framework of ASA-GNN is illustrated in Fig. 4. Its main components are neighbor sampling and neighborhood aggregation. In the neighbor sampling stage, to filter noisy neighbors and retain structural information, we define a novel neighbor sampling policy based on cosine similarity and edge weight. In the neighborhood aggregation process, the contributions of different neighbors are specified by an attention mechanism. After that, we apply a diversity metric to control the aggregation degree. Finally, a softmax function computes the probability that a transaction is fraudulent. All details are described as follows. §.§.§ Neighbor Sampling Strategy Based on Cosine Similarity and Edge Weight Simply put, GNNs leverage the information of neighbors to learn more discriminative representations. Existing studies, such as GraphSAGE <cit.>, adopt a random sampling policy under a homophily assumption. However, they ignore the quality of the information from neighbor nodes. Useless neighbor nodes around a target node lead to indistinguishable representations of nodes in different classes, whereas similar neighbor nodes provide rich information. Therefore, selecting valid neighbors is necessary before aggregation. The distance between two nodes and the weight of the edge connecting them are combined into a novel neighbor sampling policy to deal with this problem. Given a node v ∈𝒱 and its neighbor v' ∈𝒱, we use cosine similarity, which is widely used to analyze user behaviors in practical applications, to compute their distance Dis_v,v' = exp(r_v· r_v'), where r_v and r_v' are the normalized attribute vectors of nodes v and v'. The exponential function ensures that the similarity is non-negative. Note that there may be multiple edges between two nodes in a TG, each with its own weight. The most significant weight of the edges between v and v' is computed as follows: w_v,v'=max{μ_i · Weight(p_i)}_i ∈{1,⋯,m}, where μ_i=1 if the logic proposition p_i holds between v and v', i.e., p_i(v,v')=True, and μ_i=0 otherwise. Finally, given a node v ∈𝒱, the probability of its neighbor v' being selected is defined as ℙ_v,v'= w_v,v'·Dis_v,v' / ∑_u ∈𝒩_v, u ≠ v w_v,u·Dis_v,u, where 𝒩_v denotes the set of neighbor nodes of v. We perform Top-ẑ neighbor sampling to filter the noise information from useless neighbor nodes. After the above neighbor sampling, 𝒩'_v contains the selected neighbor nodes of v. However, fraudulent nodes still need neighbors to enrich their information, so we should find nodes with the same behavior pattern for them. For this purpose, we over-sample neighbors for a fraudulent node v as follows: 𝒩_v^f ={v'∈𝒱| v' ∉𝒩'_v ∧ c_v'=1 ∧ Dis_v,v'<d_f}, where c_v'=1 indicates that v' is labeled as fraudulent, Dis_v,v' is the distance between nodes v and v' calculated by Eq. (4), and d_f is a pre-set distance threshold. Therefore, if v is fraudulent, the set of its neighbors is 𝒩_v =𝒩'_v ∪𝒩_v^f. If v is legitimate, the set of its neighbors 𝒩_v is simply 𝒩'_v. §.§.§ Attention Mechanism After the neighbor sampling process, 𝒩_v contains the selected neighbor nodes of v. An aggregator function then generates the embedding representation of v at each layer. Given a node v, h_v^k denotes its representation at the k-th layer, where v ∈𝒱 and k=1,2,...,K.
Then it aggregates the information from 𝒩_v, which is the set of selected neighbor nodes, i.e., h_𝒩_v^k= α_v,v'^k ·𝔸^k(h_v'^k-1, ∀ v' ∈𝒩_v), α_v,v'^k=exp(LeakyReLU(e_v^v'))/∑_i ∈𝒩_vexp(LeakyReLU(e_v^i)), e_v^v'=f(𝒲^kh_v^k || 𝒲^kh_v'^k), where α_v,v'^k denotes the attention score of v and v' at the k-th layer, 𝔸^k is an aggregator function at the k-th layer, LeakyReLU is an activation function, f is a function mapping the high-dimensional feature to a real number and 𝒲^k is a shared parameter matrix. Generally, the interaction between two transaction records within a short interval is more important. Therefore, given a node v and its neighbor v', the attention score between them at the k-th layer is adjusted by the normalised time interval {δ t_v,v', ∀ v' ∈𝒩_v}, i.e., α_v, v'^k = δ t_v,v'·exp(LeakyReLU(e_v^v'))/∑_i ∈𝒩_vexp(LeakyReLU(e_v^i)). §.§.§ Adaptive Neighborhood Aggregation Over-smoothing is a common problem in GNN methods. Existing methods assume that the introduction of noise in the aggregation process causes the problem. Specifically, the information from neighbor nodes in the same class makes the representations maintain compactness within the class, which reflects the advantages of GNN. Interactions between a target node and its neighbors in different classes may result in indistinguishable representations. Although neighbor sampling can help us filter some noisy nodes, the camouflage issue of fraudsters brings another challenge. In applications of transaction fraud detection, fraudsters often disguise the cardholders’ behaviors by various means so that there exist edges connecting fraud nodes and legitimate ones in a TG. It exacerbates the effect of the over-smoothing issue. Therefore, when a node has a neighbor in a different class, we should consider that the neighbor may be noisy. We introduce a neighbor diversity metric 𝒟 by computing the entropy among neighbors of a target node, i.e., 𝒟(v)=-∑_c ∈ CP_c(v)log(P_c(v)), P_c(v)=|v' ∈𝒩_v|y_v'∈ c|/|𝒩_v|, where C represents the set of label classes, including legitimate and fraudulent ones. y_v' is the label of v'. The greater the value of 𝒟(v) is, the more diverse the neighbors of v are. Considering that each node has a 𝒟, we use a gating function to control the aggregation degree, i.e., g_v^k=σ(-Norm(𝒟(v))), ∀ v ∈𝒱, where Norm is the batch normalization for all nodes in a TG. The range of g_v^k is (0, 1). When plenty of noisy neighbors are connected to target node v, it is very small and close to 0. Using the gating function, we allow each node to have its neighborhood aggregation degree. To better understand our adaptive neighborhood aggregation process, the interaction operations of a target node and its neighbors are described, as shown in Fig. 5. The update function for a single layer is as follows: h_v^k=σ(𝒲^k· concat (h_v^k-1,g_v^kh_𝒩_v^k)). §.§.§ Detection Layer For the target node v, h_v^K is the final representation outputted by the K-th layer. After that, a softmax function can be applied to estimate the probability of a transaction being fraudulent. The loss function is computed as follows: ℒ= ∑_i^|ℛ|-[y_i · log(ŷ_̂î)+(1-y_i) · log(1-ŷ_̂î)], where y_i and ŷ_̂î are the labels of the i-th transaction record, and the possibility that the sample is predicted to be fraudulent, respectively, and |ℛ| represents the number of transactions. The training process of ASA-GNN is illustrated in Algorithm 1. Given a multigraph 𝒢, we first compute the selection probability and then sample ẑ neighbors for each node. 
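To make these two ingredients concrete, the sketch below evaluates the selection probability and the entropy-based aggregation gate defined above on toy inputs; the helper names, the toy adjacency data and the plain sigmoid used in place of the batch-normalised gate are illustrative assumptions, not the exact implementation.

```python
import numpy as np

def selection_probabilities(r, w, v, candidates):
    """P_{v,v'} proportional to w_{v,v'} * Dis_{v,v'} with Dis_{v,v'} = exp(r_v . r_v'),
    where r holds the normalized attribute vectors and w[v, u] is the largest
    proposition weight on the edges between v and u (toy stand-ins for the TG)."""
    dis = np.exp(r[candidates] @ r[v])
    scores = w[v, candidates] * dis
    return scores / scores.sum()

def aggregation_gate(neighbor_labels):
    """Diversity D(v) = -sum_c P_c(v) log P_c(v) over the neighbors' labels;
    the gate g_v = sigmoid(-D(v)) shrinks towards 0 for noisy, high-entropy
    neighborhoods (the batch normalisation of D is omitted for brevity)."""
    _, counts = np.unique(neighbor_labels, return_counts=True)
    p = counts / counts.sum()
    div = -(p * np.log(p)).sum()
    return 1.0 / (1.0 + np.exp(div))        # equals sigmoid(-D)

# Toy usage: 5 nodes, 4-dimensional normalized attributes, integer edge weights.
rng = np.random.default_rng(1)
r = rng.normal(size=(5, 4))
r /= np.linalg.norm(r, axis=1, keepdims=True)
w = rng.integers(1, 4, size=(5, 5)).astype(float)
cand = np.array([1, 2, 3])
p_sel = selection_probabilities(r, w, v=0, candidates=cand)
top2 = cand[np.argsort(p_sel)[::-1][:2]]    # Top-ẑ sampling with ẑ = 2
g_v = aggregation_gate(neighbor_labels=np.array([0, 0, 1]))
```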
Then we can compute the attention score α_u,v^k and the aggregation degree g_v^k. Finally, the representation for each node at the k-th layer is obtained by the aggregator. § EXPERIMENTS Based on three real-world financial datasets, we conduct the following experiments to show the advantages of ASA-GNN. §.§ Datasets and Graph Construction §.§.§ Datasets We conduct experiments on one private dataset and two public datasets to demonstrate that ASA-GNN achieves significant improvements over both classic and state-of-the-art models for transaction fraud detection tasks. The private dataset, PR01, consists of 5.133.5 million transactions from a financial company in China that took place during the second quarter of 2017. Transactions are labeled by professional investigators of a Chinese bank, with 1 representing fraudulent transactions and 0 representing legitimate ones. In data preprocessing, we first down-sample the legitimate transactions to alleviate the class-imbalance problem. Then, we apply one-hot coding and min-max normalisation to handle the discrete and continuous values, respectively. Since CARE-GNN requires a lot of computing resources, we take the latest 10,000 transaction records as a small dataset (PR02) to facilitate the test. The TC dataset[https://challenge.datacastle.cn/v3/] contains 160,764 transaction records collected by Orange Finance Company, including 44,982 fraudulent transactions and 115,782 legitimate transactions. The training and test sets are split according to the trade time of these transaction records: the records of one week form the training set and those of the following week form the test set. In this way, the TC dataset is split into TC12, TC23, and TC34. We perform the same data processing as for the PR01 and PR02 datasets. The XF dataset is a subset of 20,000 records extracted from iFLYTEK[http://challenge.xfyun.cn/2019/gamedetail?type=detail/mobileAD]. It contains five types of information, including basic data, media information, time, IP information, and device information. The XF dataset is balanced; therefore, we only perform the same data processing as for the PR01 and PR02 datasets to handle the discrete and continuous values. §.§.§ Graph Construction To construct the TG, the transactions are regarded as nodes. Then, we utilize some logic propositions to design the edges. Generally, fraudsters exhibit two characteristics: device aggregation and temporal aggregation. Device aggregation means that fraudsters, often limited by financial and regulatory constraints, commit fraud on a small number of devices. This differs from legitimate transactions, where cardholders trade on different devices. Temporal aggregation means that fraudsters must complete their fraud activities as quickly as possible, since the banks and cardholders may otherwise discover them. Therefore, we construct a TG for the private dataset using two logic propositions: p_1(a,b) is True if transactions a and b share the same Trade_ip and their Trade_time values fall within a specified time window, and False otherwise; p_2(a,b) is True if a and b share the same Trade_mac within such a time window, and False otherwise. Here Trade_ip, Trade_time, and Trade_mac are the Internet Protocol address, time and Media Access Control address of the transactions, respectively. §.§ Baselines To verify the effectiveness of ASA-GNN, general GNN models and state-of-the-art GNN-based fraud detectors are selected for comparison. The general GNN models include GCN <cit.>, GraphSAGE <cit.>, GAT <cit.>, RGCN <cit.> and HAN <cit.>. The state-of-the-art GNN-based fraud detectors include CARE-GNN <cit.> and SSA <cit.>.
* GCN <cit.>: The GCN method leverages spectral graph convolutions to extract feature information. * GraphSAGE <cit.>: GraphSAGE obtains a representation for each node using an update function that combines a random sampling policy with a neighborhood aggregation process. * GAT <cit.>: The GAT method uses graph attention layers to specify the importance of different neighbors. * CARE-GNN <cit.>: A GNN method designed for fraud detection, which improves its aggregation with reinforcement learning to identify the behavior of fraudsters. * Similar-sample + attention SAGE (SSA) <cit.>: The SSA method improves the performance of a model using a sampling strategy and an attention mechanism. * RGCN <cit.>: RGCN models relational data with a GNN for link prediction and classification tasks. * HAN <cit.>: HAN utilizes a hierarchical attention mechanism so that the contributions of different neighbors and meta-paths can be learned. §.§ Parameter Settings In ASA-GNN, we set K=3 as the number of layers, (20, 20, 20) as the neighborhood sample size, 32 as the hidden size, 0.001 as the learning rate, Adam as the optimizer and 256 as the batch size for our PR01 and PR02 datasets. We set K=3 as the number of layers, (30, 50, 50) as the neighborhood sample size, 16 as the hidden size, 0.01 as the learning rate, Adam as the optimizer and 128 as the batch size for the XF, TC12, TC23, and TC34 datasets. For all baseline algorithms, the parameters are the same as those in the corresponding papers <cit.>. §.§ Evaluation Criteria To measure the performance, we choose Recall, F_1, and the Area Under the ROC Curve (AUC) as criteria. Recall represents the ratio of the identified fraudulent transaction records to all fraudulent ones. F_1 is a common evaluation criterion in binary classification problems <cit.>. AUC is usually computed to evaluate a model on an imbalanced dataset. Recall and F_1 are calculated as follows: Recall = T_P/(T_P+F_N), Precision = T_P/(T_P+F_P), F_1 = 2 × Recall × Precision/(Recall+Precision), where T_P, F_P, and F_N are the numbers of true positive, false positive, and false negative transaction records, respectively. AUC is calculated as follows: AUC = (∑_r ∈ℛ^+ rank_r - |ℛ^+| × (|ℛ^+|+1)/2) / (|ℛ^+| × |ℛ^-|), where ℛ^+ and ℛ^- are the fraudulent and legitimate class sets and rank_r is the rank of r by the predicted score. §.§ Performance Comparison The performance of ASA-GNN and all baselines is presented in Table 3. The ROC curves of ASA-GNN and all baselines are shown in Fig. 8. We have the following observations and analysis results: * The proposed ASA-GNN achieves significant improvements over all baselines on the PR01, PR02, XF, TC12, and TC34 datasets. In particular, ASA-GNN improves F_1 and AUC by 6.8% and 6.7%, respectively, on the TC12 dataset. The overall performance therefore demonstrates the superiority of the proposed ASA-GNN. * GCN, GraphSAGE, GAT, RGCN, and HAN are traditional GNNs, none of which can identify the camouflage behavior of fraudsters. Thus, their performance is worse than that of ASA-GNN. * GraphSAGE, CARE-GNN, and SSA are all graph algorithms based on node sampling. None of them performs better than ASA-GNN. The reason is that the proposed ASA-GNN filters nodes effectively and supplements the information of minority nodes, i.e., fraud information. In addition, ASA-GNN considers the camouflage behaviors of fraudsters.
The performance of GraphSAGE is worse than that of SSA because noise information may be absorbed in the former's sampling process and the importance of different nodes needs to be considered. * CARE-GNN calculates the l_1-distance between nodes. However, it only focuses on the similarity of labels, and the l_1-distance loses its effectiveness in high-dimensional space. Although it tries its best to solve the camouflage issue of fraudsters, it still performs poorly. § CONCLUSION AND FUTURE WORK In this paper, a novel graph neural network named ASA-GNN is proposed to identify fraudulent transactions. ASA-GNN employs the neighbor sampling strategy to filter noisy nodes and make up for the lack of neighbors of fraudulent nodes. Consequently, it can make full use of the attribute and topology information in TGs. Moreover, benefiting from our neighbor diversity metric, ASA-GNN addresses the camouflage issue of fraudsters and alleviates the over-smoothing phenomenon. Extensive experiments on three financial datasets show that the proposed ASA-GNN achieves significant performance improvements over traditional and state-of-the-art methods. Therefore, ASA-GNN can better help banks and financial institutions detect fraudulent transactions and establish trust relationships with customers. Our future plans include designing an explainer for the detection model produced by ASA-GNN, since the lack of explanations may make customers distrust financial institutions <cit.>. Our TG is built on expert rules, which provides a feasible way to develop such an explainer. Studying the imbalance issue in transaction graphs and adding temporal modules (TCN/Transformer) to better capture temporal features are also interesting directions.
http://arxiv.org/abs/2307.07537v1
20230714130307
Measurements of top-quark production cross sections with the ATLAS detector
[ "Miguel Angel Principe Martin" ]
hep-ex
[ "hep-ex" ]
Measurements of top-quark production cross sections with the ATLAS detector
M. A. Principe Martin, on behalf of the ATLAS Collaboration
Universidad Autónoma de Madrid (Spain)
The Large Hadron Collider (LHC) produces a vast sample of top-quark pairs and single-top quarks. Measurements of the inclusive top-quark production rates at the LHC have reached a precision of several percent and test advanced next-to-next-to-leading-order predictions in QCD. Measurements of production cross sections test the Standard Model predictions and help to improve the Monte Carlo models. In this contribution, comprehensive measurements of top-quark-antiquark pair and single-top quark production are presented; the measurements use the data recorded by the ATLAS experiment in the years 2015-2018 during Run 2 of the LHC. A recent result from the 5 TeV operation of the LHC is also included. In addition, a first look into top-quark pair production in Run 3 data at 13.6 TeV is also presented.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023
© 2023 CERN for the benefit of the ATLAS Collaboration. Reproduction of this article or parts of it is allowed as specified in the CC-BY-4.0 license.
1. Introduction
Top quarks are predominantly produced in pp collisions in pairs via QCD and singly via electroweak (EW) interactions. The Large Hadron Collider (LHC) can be considered a top factory, providing high-statistics data samples used to test the Standard Model (SM) and search for new phenomena. Electroweak tests are done using single-top quark production, while measurements of tt̅ production allow tests of QCD at the highest accessible energy scales. For tt̅ production, next-to-next-to-leading-order plus next-to-next-to-leading-logarithm (NNLO+NNLL) predictions are available and the measurements have been used to constrain the Parton Distribution Functions (PDF) in global fits. Measuring top-quark production with precision is also a key factor for Beyond Standard Model (BSM) searches, as it is the main background. The production of single-top quarks proceeds via three channels: the t- and s-channels, in which the top quark is produced by a W-boson, and the Wt channel, in which the top quark is produced in association with a W-boson. The main backgrounds for single-top quark production are tt̅ and W+jets processes. Other background processes are Z+jets, diboson and multijet production. Top-pair production is usually classified according to the products of the W-decays into all-hadronic, semileptonic and dileptonic channels. Single-top and W+jets are the main background processes; for the all-hadronic channel, the multijet background is also important. Recently, the ATLAS [bib:ATLAS1] Collaboration published several results measuring the top-quark production cross section both in pairs and singly.
2. Single-top cross section
A measurement of the single-top quark production cross section in the s-channel in pp collisions at √(s) = 13 TeV with the ATLAS detector [bib:2209.089902] was performed. Previous measurements achieved significances of 2.5σ at √(s)=7 and 8 TeV by CMS [bib:1603.025553], and 3.2σ at √(s)=8 TeV by ATLAS [bib:1511.059804]. The new measurement at √(s) = 13 TeV using 139 fb^-1 was performed by selecting a charged isolated lepton (electron or muon), large E_T^miss and two b-tagged jets.
The method used to extract the signal is based on matrix element calculations to compute a discriminant that assigns to each event the probability of being a signal or background process. The production cross section was measured using a binned profile maximum-likelihood fit of the discriminant distribution. The measured single-top cross section is σ=8.2^+3.5_-2.9 pb with a significance of the signal over the background-only hypothesis of 3.3σ. This result is in agreement with the theoretically predicted cross section computed at next-to-leading-order (NLO) with a value of σ^ SM=10.3±0.4 pb and an expected significance of 3.9σ. 3. Top-pair cross section Three new ATLAS measurements and one combination of ATLAS and CMS results were recently published. A measurement at √(s)=5.02 TeV, using ℒ=257 pb^-1 of ATLAS data [bib:2207.013545], combines the semileptonic and dileptonic channels. The dileptonic channel uses the event counts while the semileptonic uses a boosted decision tree to increase the separation of signal and background processes. The combined measured cross section is σ=67.5±2.7 pb which agrees with the NNLO+NNLL QCD prediction of σ^ SM=68.2^+5.2_-5.3 pb. For the measurement at √(s)=7 TeV, using ℒ=4.6 fb^-1 of ATLAS data [bib:2212.005716], a support vector machine method was applied in the semileptonic channel to separate signal and background processes. The measured cross section is σ=168.5^+7.1_-6.7 pb that is in agreement with the NNLO+NNLL QCD calculation of σ^ SM=177^+10_-11 pb. The cross section measured by ATLAS using the data taken during 2022 at √(s)=13.6 TeV with a ℒ=11.3 fb^-1 [bib:CONF7] is σ=859±29 pb, in agreement with the NNLO+NNLL QCD prediction of σ^ SM=924^+32_-40 pb. The measurement was performed using the event-count method in the dileptonic channel and selecting opposite sign and different flavour leptons in the final state. A combined measurement of the top-pair cross section was performed with ATLAS and CMS data using an integrated luminosity of 5 fb^-1 at √(s)=7 TeV and 20 fb^-1 at √(s)=8 TeV [bib:2205.138308]. The measurement was done using the decays with an opposite-charge eμ pair in the final state. The result of the measurement at √(s)=7 TeV is σ(7. TeV.)=178.5±4.7 pb, in agreement with the NNLO+NNLL QCD prediction of σ^ SM=177^+10_-11 pb. In the case of the cross section at √(s)=8 TeV, the measurement is σ(8. TeV.)=243.3^+6.0_-5.9 pb and the NNLO+NNLL QCD calculation is σ^ SM=255.3^+10.6_-12.2 pb. A measurement of the ratio of the cross section at √(s)=8 TeV and the cross section at √(s)=7 TeV was also performed, which yields a result of R_8/7=1.363±0.032, which is in agreement with the prediction of R_ SM=1.428^+0.005_-0.004. In addition, fits to the combined measurement were performed to extract m_t^ pole and α_ s(m_Z). The results are m_t^ pole=173.4^+1.8_-2.0 GeV (with α_ s(m_Z) fixed to 0.118±0.001) and α_ s(m_Z)=0.1170^+0.0021_-0.0018 (with m_t^ pole fixed to 172.5±1.0 GeV). 4. Differential cross sections for tt̅ production Differential cross sections were measured in the all-hadronic final state using boosted top quarks with ℒ=139 fb^-1 at √(s) = 13 TeV [bib:JHEP049]. This analysis used Deep Neural Networks for the signal extraction. The selection requires the leading (subleading) top-quark transverse momentum to be greater than 500 (300) GeV. At particle level, measurements of normalised single and double differential cross sections were performed and compared with NLO QCD predictions. 
Figure <ref> shows the normalised single differential cross section as a function of the transverse momentum of the leading top quark and the normalised double differential cross section as a function of the transverse momentum of the tt̅ system in different regions of the transverse momentum of the leading top quark. Normalised differential cross sections were also measured at parton level. Figure <ref> shows the cross section as a function of the transverse momentum of the leading top quark. The NLO predictions show disagreements with the data, whereas the NNLO predictions give an improved description of the data. This analysis includes an Effective Field Theory (EFT) interpretation performed using dim6top [bib:dim6top10] and EFTfitter [bib:EFTfitter11]. Seven Wilson coefficients in the Warsaw basis were individually fitted using the transverse momentum of the leading top-quark distribution. Quadratic terms were included in the fits, which lead to tighter bounds on the Wilson coefficients. Measurements of lepton kinematic distributions were performed for tt̅ processes selected in the eμ decay channel in pp collisions at √(s) = 13 TeV with ℒ=139 fb^-1 [bib:2303.1534012]. Absolute and normalised differential cross sections were measured. Figure <ref> shows the cross sections as functions of the transverse momentum and pseudorapidity of the lepton and the double differential cross section as a function of the azimuthal difference between the two leptons in different regions of their invariant mass. The precision of the measurements is 2-3% for the absolute cross sections and at ≈1% level for the normalised cross sections. The NLO QCD predictions do not describe all the measured observables simultaneously. The measurement of differential tt̅ cross sections with a high transverse momentum top quark at √(s) = 13 TeV with ℒ=139 fb^-1 [bib:JHEP0613] is based on the semileptonic channel; an hadronically decaying top quark was reconstructed as a R=1 jet with transverse momentum above 355 GeV. This analysis introduces a novel method which uses the reconstructed top-quark mass to reduce the impact of uncertainties from the jet energy scale by introducing an scale factor. The use of this method improves significantly the precision of the measurements. The results include single and double differential cross sections; Figure <ref> shows the single differential cross section as a function of the rapidity of the leptonically decaying top quark and the double differential cross section as a function of the difference in azimuthal angle between the hadronic top quark and the leading additional jet in different regions of the transverse momentum of the hadronically decaying top quark. No single prediction describes all the measured observables simultaneously. Next-to-next-to-leading-order predictions at parton level were used to reweight the NLO predictions and provide a comparison with the data; this comparison shows that the NNLO corrections are important given the precision of the measurements. An EFT interpretation of the results of this analysis was performed using SMEFT@NLO and EFTfitter. Two Wilson coefficients in the Warsaw basis [bib:WARSAW14] are individually and simultaneously fitted using the transverse momentum of the hadronically decaying top-quark distribution. Some of these fits provide tighter bounds than in the global fit [bib:EFT15]. 5. 
Conclusion Several analyses of the ATLAS Collaboration using the datasets of the Runs 1, 2 and 3 of the LHC were presented; these include measurements of single-top and tt̅ production at different centre-of-mass energies and in different decay channels. A combination with CMS data is also presented. From the results presented it may be concluded that leptonic channels offer a higher precision, though the novel methods developed reduce the uncertainties coming from jets. When comparing measurements with theoretical predictions it is observed that including NNLO QCD corrections improves the agreement with the data. Two BSM searches were performed using EFT interpretations with no evidence of new physics; tighter bounds for some Wilson coefficients were obtained. 6. References [bib:ATLAS1] ATLAS Collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, https://iopscience.iop.org/article/10.1088/1748-0221/3/08/S08003JINST 3 (2008) S08003. [bib:2209.089902] ATLAS Collaboration, Measurement of single top-quark production in the s-channel in proton-proton collisions at √(s)=13 TeV with the ATLAS detector, https://link.springer.com/article/10.1007/JHEP06(2023)191JHEP 06 (2023) 191, arXiv: https://arxiv.org/abs/2209.08990. [bib:1603.025553] CMS Collaboration, Search for s channel single top quark production in pp collisions at √(s)=7 and 8 TeV, https://link.springer.com/article/10.1007/JHEP09(2016)027JHEP 9 (2016) 027, arXiv: https://arxiv.org/abs/1603.02555. [bib:1511.059804] ATLAS Collaboration, Evidence for single top-quark production in the s-channel in proton-proton collisions at √(s)=8 TeV with the ATLAS detector using the Matrix Element Method, https://www.sciencedirect.com/science/article/pii/S037026931600188X?via%3DihubPhys. Lett. B 756 (2016) 228, arXiv: https://arxiv.org/abs/1511.05980. [bib:2207.013545] ATLAS Collaboration, Measurement of the tt̅ production cross-section in pp collisions at √(s)=5.02 TeV with the ATLAS detector, https://link.springer.com/article/10.1007/JHEP06(2023)138JHEP 06 (2023) 138, arXiv: https://arxiv.org/abs/2207.01354. [bib:2212.005716] ATLAS Collaboration, Measurement of the inclusive tt̅ production cross section in the lepton+jets channel in pp collisions at √(s)= 7 TeV with the ATLAS detector using support vector machines, 2022, arXiv: https://arxiv.org/abs/2212.00571. [bib:CONF7] ATLAS Collaboration, Measurement of tt̅ and Z-boson cross sections and their ratio using pp collisions at √(s) = 13.6 TeV with the ATLAS detector, ATLAS-CONF-2023-006, 2023, url: https://cds.cern.ch/record/2854834. [bib:2205.138308] ATLAS and CMS Collaborations, Combination of inclusive top-quark pair production cross-section measurements using ATLAS and CMS data at √(s)= 7 and 8 TeV, 2022, arXiv: https://arxiv.org/abs/2205.13830. [bib:JHEP049] ATLAS Collaboration, Differential tt̅ cross-section measurements using boosted top quarks in the all-hadronic final state with 139 fb^-1 of ATLAS data, https://link.springer.com/article/10.1007/JHEP04(2023)080JHEP 04 (2023) 080, arXiv: https://arxiv.org/abs/2205.02817. [bib:dim6top10] D. Barducci et al., Interpreting top-quark LHC measurements in the standard-model effective field theory, 2018, arXiv: https://arxiv.org/abs/1802.07237. [bib:EFTfitter11] N. Castro et al., EFTfitter — A tool for interpreting measurements in the context of effective field theories, https://link.springer.com/article/10.1140/epjc/s10052-016-4280-9JHEP 04 (2023) 080, arXiv: https://arxiv.org/abs/2205.02817. 
[bib:2303.1534012] ATLAS Collaboration, Inclusive and differential cross-sections for dilepton tt̅ production measured in √(s)=13 TeV pp collisions with the ATLAS detector, 2023, arXiv: https://arxiv.org/abs/2303.15340. [bib:JHEP0613] ATLAS Collaboration, Measurements of differential cross-sections in top-quark pair events with a high transverse momentum top quark and limits on beyond the Standard Model contributions to top-quark pair production with the ATLAS detector at √(s)=13 TeV, https://link.springer.com/article/10.1007/JHEP06(2022)063JHEP 06 (2022) 063, arXiv: https://arxiv.org/abs/2202.12134. [bib:WARSAW14] B. Grzadkowski et al., Dimension-Six Terms in the Standard Model Lagrangian, https://link.springer.com/article/10.1007/JHEP10(2010)085JHEP 10 (2010) 085, arXiv: https://arxiv.org/abs/1008.4884. [bib:EFT15] SMEFiT Collaboration, Combined SMEFT interpretation of Higgs, diboson, and top quark data from the LHC, https://link.springer.com/article/10.1007/JHEP11(2021)089JHEP 11 (2021) 089, arXiv: https://arxiv.org/abs/2105.00006.
http://arxiv.org/abs/2307.04265v1
20230709210635
Enhancement and anisotropy of electron Lande factor due to spin-orbit interaction in semiconductor nanowires
[ "J. Czarnecki", "A. Bertoni", "G. Goldoni", "P. Wójcik" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
[email protected] AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, Al. Mickiewicza 30, 30-059 Krakow, Poland [email protected] CNR-NANO S3, Istituto Nanoscienze, Via Campi 213/a, 41125 Modena, Italy [email protected] Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, via Campi 213/a, 41125 Modena, Italy CNR-NANO S3, Istituto Nanoscienze, Via Campi 213/a, 41125 Modena, Italy [email protected] AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, Al. Mickiewicza 30, 30-059 Krakow, Poland
We investigate the effective g factor in semiconductor nanowires with strong Rashba spin-orbit coupling. Using the 𝐤·𝐩 theory and the envelope function approach we derive a conduction band Hamiltonian in which the g tensor is explicitly related to the spin-orbit coupling constants. Our model includes orbital effects from the Rashba spin-orbit term, leading to a significant enhancement of the effective g factor, which is naturally anisotropic. For nanowires based on the low-gap, strongly spin-orbit coupled material InSb, we investigate the anisotropy of the effective g factor with respect to the magnetic field direction, exposing a twofold symmetry for the bottom-gate architecture. The anisotropy results from the competition between the localization of the envelope function and the spin polarization of the electronic state, both determined by the magnetic field direction.
Enhancement and anisotropy of electron Lande factor due to spin-orbit interaction in semiconductor nanowires
Paweł Wójcik, February 2023
§ INTRODUCTION Semiconductor nanowires (NWs) continue to attract significant interest due to the abundance of physical phenomena observed in such nanostructures, as well as the wealth of potential applications, including optoelectronics,<cit.> quantum computing,<cit.> and spintronics.<cit.> Applications in spintronics are largely driven by the spin-orbit (SO) interaction, which – in low energy gap semiconductors, such as InAs or InSb – is sufficiently strong to enable electrical control of the electron spin. In general, the SO interaction originates from the lack of inversion symmetry, which can be an intrinsic feature of the crystallographic structure (Dresselhaus SO coupling <cit.>) or induced by the asymmetry of the confinement potential (Rashba SO coupling<cit.>). The latter has the essential advantage of being tunable by external fields, e.g., using gates attached to the nanostructures, as predicted theoretically<cit.> and demonstrated in recent experiments.<cit.> The significant progress in heteroepitaxy made over the last decade enables the growth of a thin superconducting layer on the surface of the semiconductor.<cit.> In this respect, hybrid NWs with a large SO interaction are intensively studied as the basic building blocks for topological quantum computing based on Majorana zero modes.<cit.> These exotic states are formed at the ends of NWs when the system becomes spinless, which is achieved in experiments by applying a magnetic field and exploiting the corresponding spin Zeeman effect.<cit.> The induced topological gap strongly depends on the strength of the SO coupling and the energy of the Zeeman splitting,<cit.> usually expressed in terms of a linear response to the magnetic field with a proportionality constant – the effective g factor.
In other words, the electron determines the strength of the magnetic field required to trigger the system into the topological phase. For this reason, it is desirable to make it as large as possible, as the magnetic field needed for the topological transition is required to be lower than the critical magnetic field of the superconducting shell.<cit.> In semiconducting materials is significantly different from the free-electron factor g_0, due to coupling between the valence and the conduction band. In the second-order perturbation 𝐤·𝐩 theory it leads to the Roth-Lax-Zwerdling () formula,<cit.> which for low gap semiconductors gives ≫ g_0, e.g. ≈ -49 for InSb. In particular, for semiconductor nanostructures the formula predicts a reduction of the effective factor,<cit.> as the subband confinement increases the energy gap, which is inversely proportional to .<cit.> However, unexpectedly, recent experiments in NWs based on InAs and InSb exhibit opposite behaviour - the extracted is up to three times larger than the bulk value.<cit.> Furthermore, in Ref. Marcus2018 a step like evolution of has been reported as a function of the gate voltage. It has been recently proposed that this surprising behaviour arises from the 𝐋·𝐒 coupling, which for higher subbands (characterized by the large orbital momentum) leads to an enhancement of by about one order of magnitude.<cit.> In this paper we develop a full 8 × 8  k·p theory of the effective factor in semiconductor NWs which takes into account the orbital effects in the SO coupling terms induced by an external magnetic field of arbitrary direction. We show explicitly that the response to the magnetic field can be described in terms of a tensor , whose elements originate from the vector of Rashba coupling constants . For a nanowire based on the low-gap, strongly SO coupled material InSb, we performed fully self-consistent calculations taking into account on equal footing orbital and Zeeman effect of the applied magnetic field, SO coupling and the electrostatic environment. We demonstrate that the orbital component of ensuing from the SO interaction may be greater than the one determined by the bulk and may represents a major component of , leading to the enhancement of the effective factor by an order of magnitude, even for the lowest subband, the one usually considered in Majorana experiments. Finally, we also evaluate the anisotropy of the SO induced tensor with respect to the magnetic field rotated in different planes. Our results qualitatively agree with recent experiments<cit.> reproducing the enhancement of g^* and its anisotropy. The paper is organized as follows. In Sec. II A the tensor is derived from the 8 × 8  k·p model within the envelope function approximation. Details on the numerical method are given in Sec. II B. Sec. III contains results of our calculations for homogeneous InSb NWs and their discussion with respect to recent experiments. Sec. IV summarizes our results. § THEORETICAL MODEL Below we shall derive a k·p formulation of the factor in semicondutor NWs. We shall specifically consider a homogeneous InSb, with hexagonal cross section, grown in the zincblede crystallographic structure along the [111] direction. This particular orientation preserves the crystal inversion symmetry, resulting in the reduction of the Dresselhaus SO coupling term.<cit.> The system is subjected to a uniform external magnetic field with intensity B. 
The direction of the applied magnetic field with respect to the NW axis is determined by two angles: θ, the angle between the field and the NW axis (z), and φ, the angle between the x axis (oriented along the corner-corner direction) and the projection of the field on the xy plane – see Fig. <ref>. Hence, B = [B_x,B_y,B_z]^T = B [sin(θ)cos(φ), sin(θ)sin(φ),cos(θ) ]^T . We adopt the symmetric vector potential A(r) = [ -yB_z/2, xB_z/2, yB_x - xB_y ]^T . If not stated otherwise, we assume that the back gate is attached directly to the bottom facet of NW, generating an electric field parallel to the nanowire section in the xy plane. Although in real experiments a dielectric layer separating the NW from the gate is usually used, it plays a role of screening for the electric field. Hence, the value of the factor obtained for a particular gate voltage V_g can be considered as the maximum achievable value at that specific V_g. §.§ 𝐤·𝐩 theory of the tensor Our model is based on the 8 × 8  k·p approximation described by _8 × 8 = [ _c _cv; _cv^† _v ], where _c is Hamiltonian of the conduction band electrons corresponding to Γ_6c band. In the presence of the magnetic field _c can be written as _c = H_Γ_6cI_2 × 2+1/2μ_Bg_0 σ·𝐁, where the second term corresponds to the Zeeman spin effect, where μ_B is the Bohr magneton, g_0 is the factor of the free electron and σ=(σ_x,σ_y, σ_z) is the vector of Pauli matrices, while Ĥ_Γ_6c = P̂^2/2m_0 + E_c + V(𝐫), where P̂ = p̂ - eA, e is the electron charge, m_0 is the free electron mass and E_c is the conduction band minima. The potential V(𝐫) in (<ref>) contains interaction of electrons with the electric field generated by the external gates and the electron-electron interaction included in our model at the mean field level (Hartree potential), V(𝐫)=V_g(𝐫)+V_H(𝐫). Below we shall use a folding procedure of _8 × 8 to the conduction band sector, where in the Hamiltonian _v, related to valance bands Γ_8v and Γ_7v, all off-diagonal elements are neglected. Then, _v can be written as _v = H_Γ_8vI_4 × 4⊕ H_Γ_7vI_2 × 2, with H_Γ_7v = E_v' = E_c + V(𝐫) - E_0 - Δ_0 , H_Γ_8v = E_v = E_c + V(𝐫) - E_0 , where E_0 is the energy gap and Δ_0 is the energy of spin-orbit splitting in the valence band. The coupling between the conduction band and the valence band is described by the off-diagonal matrix _̋cv, _̋cv = P_0/ħ[ P̂_+/√(6) 0 P̂_-/√(2) -√(2)P̂_z/√(3) -P̂_z/√(3) P̂_+/√(3); ; -√(2)P̂_z /√(3) -P̂_+/√(2) 0 -P̂_-/√(6) P̂_-/√(3) P̂_z /√(3) ] , where P̂_± = P̂_x ±P̂_y and the parameter P_0 = -iħ/m_0⟨ S|p̂_x|X⟩ accounts for the coupling between conduction and valence bands at the Γ point of the Brillouin zone. Using the standard folding-down transformation, we can reduce the 8× 8 𝐤·𝐩 model (<ref>) into the effective 2× 2 Hamiltonian for conduction electrons _̋𝑒𝑓𝑓 = _̋c + _̋cv(_̋v - E)^-1_̋cv^† = _̋c + _c. In the above formula, _c can be written in terms of Pauli matrices _c = λ_0I_2 × 2 + λ·σ, where λ_0 = P_0^2/3ħ^2 [P̂_x (2/E_v + 1/E_v' )P̂_x + P̂_y (2/E_v + 1/E_v' )P̂_y ], λ_x = iP_0^2/3ħ^2 [P̂_z (1/E_v - 1/E_v')P̂_y - P̂_y(1/E_v - 1/E_v' )P̂_z ], λ_y = iP_0^2/3ħ^2[P̂_x(1/E_v - 1/E_v')P̂_z - P̂_z(1/E_v - 1/E_v')P̂_x ], λ_z = iP_0^2/3ħ^2[P̂_y (1/E_v - 1/E_v' )P̂_x - P̂_x(1/E_v - 1/E_v' )P̂_y ]. The first term in Eq. (<ref>) leads to the standard formula for the effective mass 1/m^* = 1/m_0+2P_0^2/3ħ^2 ( 2/E_v + 1/E_v' ) , while the second term corresponds to the Rashba SO coupling. If we assume that E_0 and Δ_0 are the largest energies in the system we can expand E_v(v') in Eqs. 
(<ref>-<ref>) to the second order in energy. Then, Eqs. (<ref>)-(<ref>) can be rewritten as λ_x = - α_R^y ( k_z - e/ħA_z ) - eP_0^2/3ħ ( 1/E_0 - 1/E_0 + Δ_0 )B_x, λ_y = α_R^x ( k_z - e/ħA_z ) - eP_0^2/3ħ ( 1/E_0 - 1/E_0 + Δ_0 )B_y, λ_z = α_R^y ( k̂_x - e/ħA_x ) - α_R^x ( k̂_y - e/ħA_y ) - eP_0^2/3ħ ( 1/E_0 - 1/E_0 + Δ_0 )B_z, where α_R = (α_R^x, α_R^y, α_R^z) = P_0^2/3 ( 1/E_0^2 - 1/ ( E_0 + Δ_0 )^2 )∇ V(x,y) is the Rashba SO coupling constant. Note that in Eqs. (<ref>, <ref>) we have already omitted α_R^z terms since the magnetic field does not break translational invariance along the wire axis, i.e., Ψ_n,k_z(x,y,z) =ψ_n,k_z(x,y)e^ik_zz =[ψ_n,k_z^↑(x,y),ψ_n,k_z^↓(x,y)]^Te^ik_zz . Finally, the effective Hamiltonian for conduction electrons can be written as _̋𝑒𝑓𝑓 = ( P^2/2m^* + E_c + V(𝐫) ) I_2 × 2+(α_R^xσ_y-α_R^yσ_x)k_z + (α_R^y k̂_x-α_R^x k̂_y)σ_z + 1/2μ_B 𝐁σ where is a tensor given by = g_RLZ𝐈_3×3+𝐠_SO , where g_RLZ = g_0 - 2E_p/3(1/E_0 -1/E_0 + Δ_0) . corresponds to the well-know formula<cit.>, with E_p = 2m_0P_0^2/ħ^2 and the tensor 𝐠_SO results from the orbital effects of the magnetic field in the SO Hamiltonian 𝐠_SO = [ g^xx_SO g^xy_SO 0; g^yx_SO g^yy_SO 0; 0 0 g^zz_SO ]. For the assumed vector potential, the individual elements of this tensor can be expressed as g^xx_SO = 2e/μ_Bħα_R^y y, g^yy_SO = 2e/μ_Bħα_R^x x, g^zz_SO = e/μ_Bħ( α_R^y y - α_R^x x ), g^xy_SO = -2e/μ_B ħα_R^x y, g^yx_SO = -2e/μ_B ħα_R^y x. which shows that depends linearly on the vector of Rashba SO coupling constants α_R. Since the Rashba coefficients and the factor are functions of space [see Eqs. (<ref>, <ref>)], they may not be easily compared to experiments. Therefore, in the following part of the paper we discuss the matrix elements of the Rashba SO coupling constants ⟨α_R^x(y)(k_z) ⟩ _n = ⟨ψ_n,k_z |α_R^x(y)σ_y(x)|ψ_n,k_z⟩ and the individual diagonal and off-diagonal matrix elements of 𝐠_SO, respectively defined as ⟨ g_SO^xx(yy,zz)(k_z) ⟩ _n = ⟨ψ_n,k_z |g^xx(yy,zz)_SOσ_x(y,z)|ψ_n,k_z⟩, ⟨ g_SO^xy(yx)(k_z) ⟩ _n = ⟨ψ_n,k_z |g^xy(yx)_SOσ_y(x)|ψ_n,k_z⟩, where |ψ_n,k_z⟩ is the in-plane part of the n-th envelope functions of NW, to be calculated as described in the following section. §.§ Numerical calculations To understand the physics behind the behaviour of the factor in NWs with strong SO coupling, we use a numerical approach taking into account important key ingredients, namely the orbital and Zeeman effect, SO coupling and electrostatic environment. For this purpose, we employ a standard Shrödinger-Poisson approach. Assuming the translational invariance along the growth axis z, the envelope functions ψ_n,k_z(x,y)=[ψ_n,k_z^↑(x,y),ψ_n,k_z^↓(x,y)] can be determined from the Schrödinger equation [ ( P̂_2D^2/2m^* + 1/2m^*ω_c^2[(ycosθ-xsinθ)sinϕ-k_xl_B^2]^2 + E_c + V(𝐫) ) I_2 × 2+(α_R^xσ_y-α_R^yσ_x)k_z + (α_R^y k̂_x-α_R^x k̂_y)σ_z + 1/2μ_B 𝐁σ ] ψ_n,k_z(x,y)=E_n,k_zψ_n(x,y), where α_R^x(y) and are functions of the position (x,y), ω_c=eB/m^* is the cyclotron frequence, l_B=√(ħ/eB) is the magnetic length and P̂_2D^2= ( p̂_x+eBycosϕ/2 )^2 + ( p̂_y-eBxcosϕ/2 )^2. Note that in the presence of magnetic field and spin-orbit coupling the Hamiltonian (<ref>) depends on the k_z vector. The calculations are carried out on a uniform grid in the range [-k_z^max,k_z^max] where k_z^max is chosen to be much larger than the Fermi wave vector. The self-consistent potential V(𝐫) in Eq. 
(<ref>) is determined at the mean field level by solving of the Poisson equation ∇ _2D^2V(x,y)=-n_e(x,y)/ϵ_0ϵ where ϵ is a dielectric constant and the electron density n_e can be calculated based on the formula n_e(x,y)=∑_n ∫_-k_z^max^k_z^max1/2π |ψ_n,k_z(x,y)|^2 f(E_n,k_z-μ,T) dk_z where μ is the chemical potential, T is the temperature and f(E,T) is the Fermi-Dirac distribution. In the applied Shrödinger-Poisson approach, equations (<ref>) and (<ref>) are solved alternatively until the self-consistency is reached, which we consider to occur when the relative variation of the charge density between two consecutive iterations is lower than 0.001. In each iteration a spatial distribution of α_R^x(y) and g_SO^ab, where a,b={x,y,z}, are determined based on Eqs. (<ref>) and (<ref>). The numerical calculations are carried on the triangular grid corresponding to the hexagonal symmetry of the nanowire cross-section, to avoid artifacts at the boundaries which may appear when using smaller grid densities.<cit.> We assume Dirichlet boundary condition corresponding to the assumed gate architecture. Finally, the self-consistent potential V(x,y) and the corresponding wave functions ψ_n,k_z(x,y) are used to determine ⟨α_R^x,(y)⟩ _n, ⟨ g^xx(yy,zz)_SO⟩ _n and ⟨ g^xy(yx)_SO⟩ _n tensor elements according to Eqs. (<ref>, <ref>). Calculations have been carried out for the material parameters corresponding to InSb: E_0=0.235 eV, Δ_0=0.81 eV, m^*=0.014, E_P= 2m_0P/2 = 23.3 eV, T=4.2 K, and for the nanowire width W=100 nm (corner-to-corner). We keep the constant linear electron density at the low level n_e=8× 10^7 cm^-1 which guarantees that only the lowest subband is occupied in the range of the considered magnetic field B=[0,4] T. § RESULTS We shall now discuss the tensor as a function of the magnetic field intensity and direction. As g_RLZ evaluated from the formula (g_RLZ=-49 for the present material) does not depend on the magnetic field, we put particular emphasis on the role of the SO-induced component 𝐠_SO in terms of the tensor elements, Eqs. (<ref>). As shown in the previous section, corrections to the factor coming from the SO interaction are in general wave-vector dependent, which results from the orbital effects of the magnetic field. For this reason, we shall investigate 𝐠_SO as a function of both the wave vector and the magnetic field. We limit our study to the lowest subband assuming an the electrical potential is applied to the bottom gate to induce SO coupling. For simplicity, in the rest of the paper we omit the subband index in Eqs. (<ref>), (<ref>), i.e. ⟨…⟩ _n=1=⟨…⟩. §.§ Enhancement of due to SO coupling First, we show that a magnetic field oriented along the x axis, i.e., perpendicular to the NW axis and to the direction of ⟨α_R⟩, results in a substantial enhancement of the effective factor. For this purpose, we assume that V_g=0.2 V is applied to the bottom gate, generating an electric field that mantains reflection symmetry with respect to the y axis, hence ⟨α_R⟩ is directed along y by symmetry. In Fig. <ref>(a), we show the diagonal element ⟨ g_SO^xx⟩ as a function of the wavevector and the magnetic field intensity. Note that with this field configuration the off-diagonal elements vanish by symmetry. Indeed, the reflection symmetry of the electric field with respect to the y axis leads to ⟨α_R^x ⟩ =0, hence ⟨ g_SO^xy⟩ =0 [see Eq. (<ref>)]. Moreover, the even symmetry of the envelope function is unaffected by the magnetic field directed along the x, hence ⟨ g_SO^yx⟩=0 [see Eqs. (<ref>)]. 
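For reference, the bulk RLZ value against which these corrections are compared follows directly from the InSb parameters listed above; the short check below (only a sketch, assuming the band-edge limit in which the energy denominators reduce to E_0 and E_0+Δ_0) reproduces g_RLZ ≈ -49 and an effective mass close to the value used in the calculations.

```python
# Band-edge (two-band Kane) estimates from the InSb parameters used here.
E_p, E_0, Delta_0, g_0 = 23.3, 0.235, 0.81, 2.0   # eV, eV, eV, free-electron g factor

# RLZ effective g factor: g_RLZ = g_0 - (2*E_p/3) * (1/E_0 - 1/(E_0 + Delta_0))
g_RLZ = g_0 - (2.0 * E_p / 3.0) * (1.0 / E_0 - 1.0 / (E_0 + Delta_0))

# Effective mass: m_0/m* = 1 + (E_p/3) * (2/E_0 + 1/(E_0 + Delta_0))
m0_over_mstar = 1.0 + (E_p / 3.0) * (2.0 / E_0 + 1.0 / (E_0 + Delta_0))

print(f"g_RLZ  = {g_RLZ:.1f}")                  # about -49, as quoted for InSb
print(f"m*/m_0 = {1.0 / m0_over_mstar:.4f}")    # about 0.013, close to the 0.014 used here
```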
Fig. <ref>(a) clearly demonstrates that the correction to the effective factor arising from the orbital effects in the SO coupling term reaches a value similar to that obtained from the RLZ formula. Under certain conditions, this enhancement can lead to a significant increase of g^*, almost doubling it, as observed in recent experiments. <cit.> In Fig. <ref>(a) we distinguish three regions, with positive (yellow), negative (purple) and vanishing (black) ⟨ g_SO^xx⟩. The abrupt change of sign between positive and negative regions is simply understood as the crossing of subbands of opposite spin, since only the value for the lowest subband is shown here. Indeed, as shown in Fig. <ref>(b), the subband of opposite spin cross at k_z=0 at vanishing field. When the field is switched on, both subband shift to negative k_z and shift in energy due to Zeeman term. Hence, the crossing shifts linearly with the field to more negative wavevectors, as shown in Fig. <ref>(a). For sufficiently large k_z>0 and field intensity, ⟨ g_SO^xx⟩ almost vanishes, as shown in Fig. <ref>(a) - black region. This is due to the localization and symmetry of the envelope functions, which are determined by the orbital coupling to the magnetic field. Figure <ref>(a) illustrates maps of the position-dependent SO coupling constants α_R^x(y) [see Eq. (<ref>)] calculated in a self-consistent cycle at B=0. The spatial distribution is primarily influenced by the electric field generated by the bottom gate and do not undergo significant changes as the magnetic field increases. Since the value of 𝐠_SO elements depends on the Rashba SOC constant, the SO-induced modification of the factor for a specific subband is most significant when its envelope function is localized in the regions of strong Rashba SOC, which, in turn, is determined by both the electric and the magnetic field, as we discuss below. In Fig. <ref>(b), we report the squared envelope functions of the lowest subbands at k_z=0 and k_z=0.4 nm^-1 at increasing magnetic fields. At k_z=0 there is no kinetic coupling to the magnetic field and the localization of the envelope function is only determined by the electric field; hence, it concentrates near the bottom gate, where the SOC is strong. For a positive wave vectors k_z, instead, the orbital effects shift the wave function towards the opposite facet of the NW, where the SO coupling is weak, leading to vanishing ⟨ g^xx_SO⟩, which explains the black region in Fig. <ref>(a). As shown in Fig. <ref>(a), the stronger the magnetic field, the lower k_z is required to push the wave function away from the region with large SO coupling, near the bottom facet. Naively, one might expect that the state k_z=0 would not be affected by this phenomenon as there is not orbital coupling to the magnetic field for this state. However, it should be noted that for high magnetic fields, diamagnetic effects become dominant, causing the wave functions to localize in the middle of NW along the field direction, resembling dispersionless Landau levels, as shown in Fig. <ref>(b). As the position of this wave function is associated with low SO coupling regions, ⟨ g^xx_SO⟩ gradually decreases towards zero, even for k_z=0, as illustrated in Fig. <ref>(a). We next discuss the behavior of the SO-induced factor with the magnetic field directed either parallel to α_R (along the y axis) or to the NW axis (along the z axis). When the magnetic field is applied parallel to α_R, ⟨ g^yy_SO⟩ 0. However, its magnitude, shown in Fig. 
<ref>(a), is not as large as ⟨ g^xx_SO⟩ in the perpendicular orientation – compare with Fig. <ref>(a). In this configuration the off-diagonal element ⟨ g_SO^yx⟩ is non-negligible, in contrast to ⟨ g_SO^xy⟩ which is nearly zero, as the average value of α_R^x vanishes due to the gate symmetry. Again, the evolution of both ⟨ g_SO^yy⟩ and ⟨ g_SO^yx⟩ as a function of the magnetic field, shown in Fig. <ref>(a) and Fig. <ref>(b), respectively, is determined by the localization and symmetry of the wave function. In Fig. <ref>(b), one can observe that at zero magnetic field, the wave function sets itself at the center-bottom of the NW. In this region, α_R^x is antisymmetric with respect to the x axis, resulting in the significant suppression of ⟨ g_SO^yy⟩ and ⟨ g_SO^xy⟩, which vanish at k_z=0. The symmetry of the wave function is broken by the magnetic field, as depicted in Fig. <ref>(b). For k_z = 0.4 nm^-1, for increasing magnetic fields, the wave function is first localized at the bottom-left corner, where the contribution from negative α_R^x leads to non-zero values of ⟨ g_SO^yy(yx)⟩, and eventually in the left corner, where α_R^x is significantly lower, resulting in a decrease in ⟨ g_SO^yy(yx)⟩. This field-induced evolution leads to a maximum of ⟨ g_SO^yy(yx)⟩ at a certain k_z value, as illustrated in Fig. <ref>(a,b). We next consider a magnetic field applied in the z direction, i.e., along the NW axis. The finite value of ⟨ g_SO^zz⟩, shown in Fig. <ref>, has a different nature, since the orbital effects of the magnetic field are strongly reduced by the confinement. In this case the localization of the wave function does not change measurably with the magnetic field, regardless of k_z, and thus it does not determine the evolution of ⟨ g_SO^zz⟩ with k_z and B_z. Rather, this evolution is governed by the interplay between the Zeeman effect and the SO interaction, which mixes the spin states. Note that all tensor elements ⟨ g_SO^xx(yy,zz)⟩ [see Eqs. (<ref>)] are in fact defined by the energy splitting caused by the orbital effects from the SO interaction, which makes ⟨ g_SO^zz⟩ sensitive to the relative weight of the spin-up and spin-down components in the spinor. Since the SO coupling depends on the wave vector, for a small k_z the ordinary Zeeman effect is dominant, aligning the electron spin along the magnetic field direction and, in the limit k_z=0, making the system spin polarized along the z axis. The expectation value of σ_z in this case is the largest, resulting in the large value of ⟨ g_SO^zz⟩. In other words, the value of ⟨ g_SO^zz⟩ for small k_z results from the finite Rashba coupling near the bottom gate, where the wave function is localized, and from the almost complete z-spin polarization of the electrons induced by the magnetic field. As a consequence, ⟨ g_SO^zz⟩ is independent of the magnetic field magnitude at k_z=0. On the other hand, for a large value of k_z and a low magnetic field, the SO coupling plays a major role, forcing the electron spin to align along the effective Rashba field directed along the x axis. In this scenario, the spin-up and spin-down components of the spinor become almost equal, resulting in a decrease in ⟨ g_SO^zz⟩. It is worth noting that even for a large k_z and strong SO coupling, an increasing magnetic field can deviate the electron spin direction from the x towards the z axis, leading to an overall increase in ⟨ g_SO^zz⟩ with the magnetic field, as depicted in Fig. <ref>. Finally, note that the results presented in Fig.
<ref> for the magnetic field directed along the z axis at k_z=0 correspond to the physical situation considered theoretically in Ref. Winkler2017, where the enhancement of the effective g-factor has been recently predicted in semiconductor NWs. The predicted effect was, however, restricted to the higher subbands characterized by a large orbital momentum. Here, we show that the enhancement of g^* can also be observed for the lowest state, for which it is induced by the orbital effects from the spin-orbit term. To summarize this section, we conclude that the significant enhancement of the effective g-factor observed in recent experiments can be explained as resulting from the orbital effects from the SO coupling. Our results demonstrate that this enhancement is most pronounced when the magnetic field is directed perpendicular to α_R. In Fig. <ref> we show the gate voltage dependence of ⟨ g_SO^xx⟩, calculated for a magnetic field directed along the x axis with B_x = 1 T. It can be observed that the inclusion of the SO effects may lead to a substantial increase of the effective g-factor, with 𝐠_SO reaching up to three times the value obtained from the RLZ formula. §.§ Spin-orbit induced g-factor anisotropy We next analyze the anisotropy of 𝐠_SO with respect to the field direction. For this purpose we consider a magnetic field with intensity B=1 T rotated in (i) the xz plane (φ = 0), (ii) the xy plane (θ = π/2) and (iii) the yz plane (φ = π /2). To induce Rashba SO coupling we apply a gate voltage V_g = 0.2 V. Figures <ref>(a),(b) show maps of ⟨ g_SO^xx⟩ and ⟨ g_SO^zz⟩ as a function of the wave vector k_z and θ when the magnetic field is rotated in the xz plane. The black region on the right side of both panels originates from the localization of the wave function far away from the bottom gate, in the region where the SO coupling is weak. This is apparent in Fig. <ref>, which shows the squared wave function for k_z = 0.4 nm^-1 under different magnetic field orientations. Interestingly, we observe unusual behavior in the region where ⟨ g_SO^xx⟩ changes sign. As discussed earlier, when the magnetic field is directed along the x axis, this sign change is due to subband crossing. However, here the finite z component of the magnetic field, perpendicular to the effective Rashba field, causes an anticrossing of the subbands. The magnitude and position of the anticrossing in wave vector space depend on the orientation of 𝐁. The damping of ⟨ g_SO^xx⟩ to zero in the sign-change region, accompanied by a maximum in |⟨ g_SO^zz⟩ |, can be explained by considering the evolution of the electron spin at the anticrossing. Figure <ref> presents the z-spin polarization of the lowest subbands, defined as P = ∫ (|ψ^↑_k_z(x,y)|^2 - |ψ^↓_k_z(x,y)|^2) dxdy, as a function of k_z for different angles θ. We observe that at the anticrossing the states become completely z-spin polarized, which maximizes |⟨ g_SO^zz⟩|. Simultaneously, the average value of σ_x, which determines |⟨ g_SO^xx⟩| [see Eq. (<ref>)], becomes zero, which explains its vanishing for a specific k_z vector. The evolution of the SO-induced g-factor in the other rotation planes, as depicted in Figs. <ref>(c-i), is in general a result of the interplay between the wave function localization, which is determined by the orbital effects, and the electron spin direction, which is defined by both the SO interaction and the external magnetic field.
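As a small numerical illustration of the polarization P defined above, the following Python sketch evaluates the integral as a sum over a discretized cross-section. The spinor components psi_up and psi_dn on the 2D grid and the grid cell area are hypothetical inputs for the example, not arrays taken from the actual calculation.

```python
import numpy as np

def z_spin_polarization(psi_up, psi_dn, cell_area):
    """P = integral(|psi_up|^2 - |psi_dn|^2) dx dy, evaluated as a sum
    over the grid covering the nanowire cross-section.

    psi_up, psi_dn : complex 2D arrays with the spin-up / spin-down
                     envelope components at a given k_z (normalized so
                     that sum(|psi_up|^2 + |psi_dn|^2) * cell_area = 1)
    cell_area      : area of a single grid cell
    """
    return float(np.sum(np.abs(psi_up)**2 - np.abs(psi_dn)**2) * cell_area)
```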
It is worth noting that when the magnetic field has a component along the y or the z axis, the off-diagonal elements of the 𝐠_SO tensor may also contribute significantly to the effective g-factor - the magnitudes of ⟨ g_SO^xy(yx)⟩ in Figs. <ref>(d,e,g) are comparable to those of the diagonal elements. Although the maps of the 𝐠_SO tensor elements presented so far provide valuable information and offer a precise representation of the physical phenomena underlying their evolution, it is challenging to compare them directly with recent experimental results. In experiments, the k_z vector is often not well defined, and what is typically obtained is an average value of g^* over all electronic states involved in the transport. For this reason we define the mean value of the 𝐠_SO tensor elements averaged over all occupied states g^ab_SO = ∑_k_z |⟨ g^ab_SO(k_z) ⟩ | f(E_n=1,k_z - μ, T)/∑_k_z f(E_n=1,k_z - μ, T), where a,b={x,y,z}. Such an approach has recently been used for analyzing the SO coupling in NWs and good agreement with experiments has been obtained.<cit.> In Fig. <ref> we show the mean value of the tensor elements g^ab_SO and the Rashba SO constant α^y_R (defined in the same manner) for three different rotation planes. We observe that, irrespective of the rotation plane, all elements g^ab_SO exhibit strong anisotropy with a two-fold symmetry, closely corresponding to the evolution of the SO coupling shown in Fig. <ref>(d-f) (for the bottom gate, α^x_R=0 due to the symmetry with respect to the y axis and is therefore not shown). A similar two-fold symmetry with respect to the magnetic field direction has recently been observed in the Rashba SO coupling measured for a suspended InAs NW.<cit.> In both cases, the symmetry arises from the bottom gate architecture, which induces a large SO coupling near the bottom facet, while the rotating magnetic field alters the localization of the wave function due to the orbital effects. Since g_SO^zz is sensitive to the spin polarization of the electronic states rather than to the orbital effects, we do not observe a direct correspondence between g_SO^zz and α^y_R - compare Figs. <ref>(c,f). It is noteworthy that g_SO^xx remains the most robust against the rotation in the xy plane [see Fig. <ref>(b)], and it dominates over the other terms for the considered gate setup. This can be attributed to the large coupling constant α^y_R induced by the bottom gate voltage and the broken symmetry with respect to the x axis – see Eq. (<ref>). Finally, it should be emphasized that the off-diagonal tensor components are one order of magnitude smaller than the diagonal ones. This observation holds true for the considered bottom gate configuration, which preserves the symmetry around the y axis, but it may differ for more sophisticated gate configurations, as presented in the next subsection. §.§ Different gate configuration In order to analyze in detail the magnitude of the off-diagonal elements of the 𝐠_SO tensor, let us now consider an asymmetric gate configuration with two gates attached to the top and left-top facets. In this case the voltage applied to the gates generates both the x and y components of the Rashba SO coupling - see Fig. <ref>(d-f). In particular, the negative voltage generates an effective band bending near the gates, similar to that observed in the Majorana NWs at the superconductor/semiconductor interface.
Thus, in some sense the presented architecture can be treated as a first approximation of typical Majorana NWs with a superconducting shell covering the top and left-top facets, but with one important difference, i.e., that the g-factor in the Majorana NWs is additionally modulated by the presence of the metallic shell.<cit.> As shown in Fig. <ref>(a), in this configuration the off-diagonal elements of 𝐠_SO are of the same order of magnitude as the diagonal elements. This additional contribution plays a role in enhancing the overall effective g-factor. While the general principle that the largest SO-induced g-factor occurs when the magnetic field is perpendicular to α _R also holds for this gate configuration, it is remarkable that even for the magnetic field aligned along the NW axis, the configuration relevant to Majorana experiments, there is a significant enhancement in g_SO^zz. Consequently, we believe that our model, when applied to higher gate voltages, can account for the twofold enhancement of the effective g-factor recently observed in Majorana NWs.<cit.> § SUMMARY Based on the 𝐤·𝐩 theory within the envelope function approximation, we have analyzed the effective g-factor induced by the SO coupling in homogeneous semiconductor NWs under different magnetic field and gate configurations. By considering the orbital effects in the kinetic and SO terms, we have obtained the 𝐠_SO tensor and studied its elements with respect to the magnetic field magnitude and orientation. Our findings demonstrate that the effective g-factor induced by the SO interaction is proportional to the Rashba coupling constant, which arises from the electric field generated by the adjacent gates. We have found that 𝐠_SO is determined by two factors: 1) the position and symmetry of the electron's wave function, which can be tuned by the orbital effects, and 2) the z spin polarization of the electronic state. Specifically, when we apply the magnetic field perpendicular to the NW, the inversion symmetry of the envelope functions is broken and the wave function is squeezed to the NW surface by a k_z-dependent effective potential. This effect results in an enhancement of 𝐠_SO when the envelope function is squeezed toward the facet near the gate, where the electric field and consequently the Rashba SO coupling are larger. The opposite magnetic field (or k_z) results in the squeezing of the wave function toward the opposite facet, where the electric field from the gate and the corresponding SO coupling are weak, which results in a nearly zero 𝐠_SO. On the other hand, for 𝐁 directed along the NW axis the orbital effects are strongly reduced by the confinement and 𝐠_SO depends on the z component of the spin polarization, which results from the interplay of the magnetic field and the effective Rashba field. Our results explain the recently demonstrated enhancement of the effective g-factor observed in semiconductor NWs as well as its anisotropy.<cit.> Note that although our simulations have been limited to the regime where only the lowest subband is occupied, from our previous papers we expect that the electron-electron interaction, here introduced at the mean-field level, could be essential in estimating the g-factor, via charge localization. In the high-concentration regime the total energy is minimized by reducing the repulsive Coulomb energy, moving electrons outwards, so that the charge localizes in six quasi-1D channels at the edges. As we discussed in Ref. Wojcik_anizotropy, this strong localization is almost insensitive to the gate potential and the magnetic field direction.
Finally, we would like to underline that our model does not include the coupling to the hole bands, expressed in the 𝐤·𝐩 model by the Luttinger parameters.<cit.> Note, however, that as recently shown in Ref. Escribano, the applied conduction band approximation underestimates the SO coupling constant for the considered zinc-blende crystal structure. As the considered SO-induced g-factor depends on the Rashba SO constants, we expect that the renormalization of the effective g-factor observed in the experiments should be even greater than predicted by our results. § ACKNOWLEDGEMENT The work was supported in part by PL-Grid Infrastructure, grant no. PLG/2022/015712. * § SIZE DEPENDENCE Calculations presented in the paper have been carried out for the NW width W=100 nm for two reasons. First, it is a typical diameter of NWs fabricated by commonly used fabrication methods, and second, for this range of NW width the orbital effects considered here become significant. For completeness, in Fig. <ref> we present g_SO^xx and g_SO^zz calculated with a magnetic field along the x and z directions, respectively. As expected, for a small diameter, when the orbital effects are strongly reduced, the spin-orbit induced g-factor approaches zero, which shows that the predicted enhancement of g^* is observable only for NWs of moderate or large width. [Reimer et al.(2011)Reimer, van Kouwen, Barkelind, Hocevar, van Weert, Algra, Bakkers, Björk, Schmid, Riel, Kouwenhoven, and Zwiller]Reimer author author M. E. Reimer, author M. P. van Kouwen, author M. Barkelind, author M. Hocevar, author M. H. M. van Weert, author R. E. Algra, author E. P. A. M. Bakkers, author M. T. Björk, author H. Schmid, author H. Riel, author L. P. Kouwenhoven, and author V. Zwiller, @noop journal journal J. Nanophotonics volume 5, pages 053502 (year 2011)NoStop [Stettner et al.(2016)Stettner, Zimmermann, Loitsch, Döblinger, Regler, Mayer, Winnerl, Matich, Riedl, Kaniber, Abstreiter, Koblmüller, , and Finley]Stettner author author T. Stettner, author P. Zimmermann, author B. Loitsch, author M. Döblinger, author A. Regler, author B. Mayer, author J. Winnerl, author S. Matich, author H. Riedl, author M. Kaniber, author G. Abstreiter, author G. Koblmüller, , and author J. J. Finley, @noop journal journal Appl. Phys. Lett. volume 108, pages 011108 (year 2016)NoStop [Li et al.(2006)Li, Qian, Xiang, and Lieber]Li author author Y. Li, author F. Qian, author J. Xiang, and author C. M. Lieber, @noop journal journal Materials Today volume 9, pages 18 (year 2006)NoStop [Czaban et al.(2009)Czaban, Thompson, and LaPierre]Czaban2009 author author J. A. Czaban, author D. A. Thompson, and author R. R. LaPierre, @noop journal journal Nano Lett. volume 9, pages 148 (year 2009)NoStop [Nadj-Perge et al.(2010)Nadj-Perge, Frolov, Bakkers, and Kouwenhoven]NadjPerge author author S. Nadj-Perge, author S. Frolov, author E. P. A. M. Bakkers, and author L. P. Kouwenhoven, @noop journal journal Nature volume 468, pages 1084 (year 2010)NoStop [Frolov et al.(2013)Frolov, Plissard, Nadj-Perge, Kouwenhoven, and Bakkers]Frolov author author S. M. Frolov, author S. R. Plissard, author S. Nadj-Perge, author L. P. Kouwenhoven, and author E. P. A. M.
Bakkers, @noop journal journal MRS Bulletin volume 38, pages 809 (year 2013)NoStop [Schroer et al.(2011)Schroer, Petersson, Jung, and Petta]Schroer author author M. D. Schroer, author K. D. Petersson, author M. Jung, and author J. R. Petta, 10.1103/PhysRevLett.107.176811 journal journal Phys. Rev. Lett. volume 107, pages 176811 (year 2011)NoStop [Pribiag et al.(2013)Pribiag, Nadj-Perge, Frolov, van den Berg, van Weperen, Plissard, Bakkers, and Kouwenhoven]Pribiag author author V. S. Pribiag, author S. Nadj-Perge, author S. M. Frolov, author I. van den Berg, J W G., author I. van Weperen, author S. R. Plissard, author E. P. A. M. Bakkers, and author L. P. Kouwenhoven, @noop journal journal Nature Nanotechnology volume 8, pages 170 (year 2013)NoStop [Miladi ćć et al.(2020)Miladi ćć, Stipsi ćć, Dobardžži ćć, and Milivojevi ćć]Miladi author author S. Miladi ćć, author P. Stipsi ćć, author E. Dobardžži ćć, and author M. Milivojevi ćć, 10.1103/PhysRevB.101.155307 journal journal Phys. Rev. B volume 101, pages 155307 (year 2020)NoStop [Nadj-Perge et al.(2012)Nadj-Perge, Pribiag, van den Berg, Zuo, Plissard, Bakkers, Frolov, and Kouwenhoven]NadjPergePRL author author S. Nadj-Perge, author V. S. Pribiag, author J. W. G. van den Berg, author K. Zuo, author S. R. Plissard, author E. P. A. M. Bakkers, author S. M. Frolov, and author L. P. Kouwenhoven, 10.1103/PhysRevLett.108.166801 journal journal Phys. Rev. Lett. volume 108, pages 166801 (year 2012)NoStop [Wójcik et al.(2014)Wójcik, Adamowski, Spisak, and Wołoszyn]Wojcik2014 author author P. Wójcik, author J. Adamowski, author B. J. Spisak, and author M. Wołoszyn, @noop journal journal Journal of Applied Physics volume 115, pages 104310 (year 2014)NoStop [Dresselhaus(1955)]Dresselhaus author author G. Dresselhaus, 10.1103/PhysRev.100.580 journal journal Phys. Rev. volume 100, pages 580 (year 1955)NoStop [Rashba(1960)]Rashba author author E. I. Rashba, @noop journal journal Phys. Solid State volume 2, pages 1109 (year 1960)NoStop [Campos et al.(2018)Campos, Faria Junior, Gmitra, Sipahi, and Fabian]Campos author author T. Campos, author P. E. Faria Junior, author M. Gmitra, author G. M. Sipahi, and author J. Fabian, 10.1103/PhysRevB.97.245402 journal journal Phys. Rev. B volume 97, pages 245402 (year 2018)NoStop [Kokurin(2015)]Kokurin2015 author author I. A. Kokurin, @noop journal journal Physica E volume 74, pages 264 (year 2015)NoStop [Kokurin(2014)]Kokurin2014 author author I. A. Kokurin, @noop journal journal Solid State. Commun. volume 195, pages 49 (year 2014)NoStop [Wójcik et al.(2018)Wójcik, Bertoni, and Goldoni]Wojcik2018 author author P. Wójcik, author A. Bertoni, and author G. Goldoni, 10.1103/PhysRevB.97.165401 journal journal Phys. Rev. B volume 97, pages 165401 (year 2018)NoStop [Wójcik et al.(2021)Wójcik, Bertoni, and Goldoni]Wojcik2021 author author P. Wójcik, author A. Bertoni, and author G. Goldoni, https://link.aps.org/doi/10.1103/PhysRevB.103.085434 journal journal Phys. Rev. B volume 103, pages 085434 (year 2021)NoStop [Escribano et al.(2020)Escribano, Yeyati, and Prada]Escribano author author S. D. Escribano, author A. L. Yeyati, and author E. Prada, 10.1103/PhysRevResearch.2.033264 journal journal Phys. Rev. Res. volume 2, pages 033264 (year 2020)NoStop [Wójcik et al.(2019)Wójcik, Bertoni, and Goldoni]Wojcik2019 author author P. Wójcik, author A. Bertoni, and author G. 
Goldoni, @noop journal journal Applied Physics Letters volume 114, pages 073102 (year 2019)NoStop [Furthmeier et al.(2016)Furthmeier, Dirnberger, Gmitra, Bayer, Forsch, Hubmann, Schüller, Reiger, Fabian, Korn, and Bougeard]Furthmeier author author S. Furthmeier, author F. Dirnberger, author M. Gmitra, author A. Bayer, author M. Forsch, author J. Hubmann, author C. Schüller, author E. Reiger, author J. Fabian, author T. Korn, and author D. Bougeard, @noop journal journal Nat Commun. volume 7, pages 12413 (year 2016)NoStop [van Weperen et al.(2015)van Weperen, Tarasinski, Eeltink, Pribiag, Plissard, Bakkers, Kouwenhoven, and Wimmer]vanWeperen2015 author author I. van Weperen, author B. Tarasinski, author D. Eeltink, author V. S. Pribiag, author S. R. Plissard, author E. P. A. M. Bakkers, author L. P. Kouwenhoven, and author M. Wimmer, @noop journal journal Phys. Rev. B volume 91, pages 201413(R) (year 2015)NoStop [Kammhuber et al.(2017)Kammhuber, Cassidy, Pei, Nowak, Vuik, Car, Plissard, Bakkers, Wimmer, and Kouwenhoven]Kammhuber2017 author author J. Kammhuber, author M. C. Cassidy, author F. Pei, author M. P. Nowak, author A. Vuik, author D. Car, author S. R. Plissard, author E. P. A. M. Bakkers, author M. Wimmer, and author L. P. Kouwenhoven, @noop journal journal Nat Commun. volume 8, pages 478 (year 2017)NoStop [Dhara et al.(2009)Dhara, Solanki, Singh, Narayanan, Chaudhari, Gokhale, Bhattacharya, and Deshmukh]Dhara2009 author author S. Dhara, author H. S. Solanki, author V. Singh, author A. Narayanan, author P. Chaudhari, author M. Gokhale, author A. Bhattacharya, and author M. M. Deshmukh, 10.1103/PhysRevB.79.121311 journal journal Phys. Rev. B volume 79, pages 121311 (year 2009)NoStop [Scherübl et al.(2016)Scherübl, Fülöp, Madsen, Nygård, and Csonka]Scherubl2016 author author Z. Scherübl, author G. m. H. Fülöp, author M. H. Madsen, author J. Nygård, and author S. Csonka, 10.1103/PhysRevB.94.035444 journal journal Phys. Rev. B volume 94, pages 035444 (year 2016)NoStop [Liang and Gao(2012)]Liang2012 author author D. Liang and author X. P. A. Gao, @noop journal journal Nano Lett. volume 12, pages 3263–3267 (year 2012)NoStop [Gazibegovic et al.(2017)Gazibegovic, Car, Zhang, Balk, Logan, de Moor, Cassidy, Schmits, Xu, Wang, Krogstrup, Op het Veld, Zuo, Vos, Shen, Bouman, Shojaei, Pennachio, Lee, van Veldhoven, Koelling, Verheijen, Kouwenhoven, Palmstrøm, and Bakkers]gazibegovic_epitaxy_2017 author author S. Gazibegovic, author D. Car, author H. Zhang, author S. C. Balk, author J. A. Logan, author M. W. A. de Moor, author M. C. Cassidy, author R. Schmits, author D. Xu, author G. Wang, author P. Krogstrup, author R. L. M. Op het Veld, author K. Zuo, author Y. Vos, author J. Shen, author D. Bouman, author B. Shojaei, author D. Pennachio, author J. S. Lee, author P. J. van Veldhoven, author S. Koelling, author M. A. Verheijen, author L. P. Kouwenhoven, author C. J. Palmstrøm, and author E. P. A. M. Bakkers, 10.1038/nature23468 journal journal Nature volume 548, pages 434 (year 2017)NoStop [Krogstrup et al.(2015)Krogstrup, Ziino, Chang, Albrecht, Madsen, Johnson, Nygård, Marcus, and Jespersen]krogstrup_epitaxy_2015 author author P. Krogstrup, author N. L. B. Ziino, author W. Chang, author S. M. Albrecht, author M. H. Madsen, author E. Johnson, author J. Nygård, author C. M. Marcus, and author T. S. Jespersen, 10.1038/nmat4176 journal journal Nat. Mater. 
volume 14, pages 400 (year 2015)NoStop [Chang et al.(2015)Chang, Albrecht, Jespersen, Kuemmeth, Krogstrup, Nygård, and Marcus]chang_hard_2015 author author W. Chang, author S. M. Albrecht, author T. S. Jespersen, author F. Kuemmeth, author P. Krogstrup, author J. Nygård, and author C. M. Marcus, 10.1038/nnano.2014.306 journal journal Nat. Nano. volume 10, pages 232 (year 2015)NoStop [Kjaergaard et al.(2016)Kjaergaard, Nichele, Suominen, Nowak, Wimmer, Akhmerov, Folk, Flensberg, Shabani, Palmstrøm, and Marcus]kjaergaard_quantized_2016 author author M. Kjaergaard, author F. Nichele, author H. J. Suominen, author M. P. Nowak, author M. Wimmer, author A. R. Akhmerov, author J. A. Folk, author K. Flensberg, author J. Shabani, author C. J. Palmstrøm, and author C. M. Marcus, 10.1038/ncomms12841 journal journal Nat. Commun. volume 7, pages 12841 (year 2016)NoStop [Mourik et al.(2012)Mourik, Zuo, Frolov, Plissard, Bakkers, and Kouwenhoven]mourik_signatures_2012 author author V. Mourik, author K. Zuo, author S. M. Frolov, author S. R. Plissard, author E. P. a. M. Bakkers, and author L. P. Kouwenhoven, 10.1126/science.1222360 journal journal Science volume 336, pages 1003 (year 2012)NoStop [Deng et al.(2012)Deng, Yu, Huang, Larsson, Caroff, and Xu]deng_anomalous_2012 author author M. T. Deng, author C. L. Yu, author G. Y. Huang, author M. Larsson, author P. Caroff, and author H. Q. Xu, 10.1021/nl303758w journal journal Nano Lett. volume 12, pages 6414 (year 2012)NoStop [Albrecht et al.(2016)Albrecht, Higginbotham, Madsen, Kuemmeth, Jespersen, Nygård, Krogstrup, and Marcus]albrecht_exponential_2016 author author S. M. Albrecht, author A. P. Higginbotham, author M. Madsen, author F. Kuemmeth, author T. S. Jespersen, author J. Nygård, author P. Krogstrup, and author C. M. Marcus, 10.1038/nature17162 journal journal Nature volume 531, pages 206 (year 2016)NoStop [Zhang et al.(2018)Zhang, Liu, Gazibegovic, Xu, Logan, Wang, van Loo, Bommer, de Moor, Car, Veld, van Veldhoven, Koelling, Verheijen, Pendharkar, Pennachio, Shojaei, Lee, Palmstrom, Bakkers, Sarma, and Kouwenhoven]zhang_quantized_2017 author author H. Zhang, author C.-X. Liu, author S. Gazibegovic, author D. Xu, author J. A. Logan, author G. Wang, author N. van Loo, author J. D. S. Bommer, author M. W. A. de Moor, author D. Car, author R. L. M. O. h. Veld, author P. J. van Veldhoven, author S. Koelling, author M. A. Verheijen, author M. Pendharkar, author D. J. Pennachio, author B. Shojaei, author J. S. Lee, author C. J. Palmstrom, author E. P. A. M. Bakkers, author S. D. Sarma, and author L. P. Kouwenhoven, @noop journal journal Nature volume 556, pages 74 (year 2018)NoStop [Finck et al.(2013)Finck, Van Harlingen, Mohseni, Jung, and Li]finck_anomalous_2013 author author A. D. K. Finck, author D. J. Van Harlingen, author P. K. Mohseni, author K. Jung, and author X. Li, 10.1103/PhysRevLett.110.126406 journal journal Phys. Rev. Lett. volume 110, pages 126406 (year 2013)NoStop [Oreg et al.(2010)Oreg, Refael, and von Oppen]oreg_helical_2010 author author Y. Oreg, author G. Refael, and author F. von Oppen, 10.1103/PhysRevLett.105.177002 journal journal Phys. Rev. Lett. volume 105, pages 177002 (year 2010)NoStop [Sau et al.(2010)Sau, Lutchyn, Tewari, and Das Sarma]sau_generic_2010 author author J. D. Sau, author R. M. Lutchyn, author S. Tewari, and author S. Das Sarma, 10.1103/PhysRevLett.104.040502 journal journal Phys. Rev. Lett. volume 104, pages 040502 (year 2010)NoStop [Lutchyn et al.(2010)Lutchyn, Sau, and Das Sarma]Lutchyn author author R. M. 
Lutchyn, author J. D. Sau, and author S. Das Sarma, 10.1103/PhysRevLett.105.077001 journal journal Phys. Rev. Lett. volume 105, pages 077001 (year 2010)NoStop [Roth et al.(1959)Roth, Lax, and Zwerdling]Roth-Lax author author L. Roth, author B. Lax, and author S. Zwerdling, @noop journal journal Phys. Rev. volume 114, pages 90 (year 1959)NoStop [Lommer et al.(1985)Lommer, Malcher, and Rössler]Lommer author author G. Lommer, author F. Malcher, and author U. Rössler, 10.1103/PhysRevB.32.6965 journal journal Phys. Rev. B volume 32, pages 6965 (year 1985)NoStop [Kiselev et al.(1998)Kiselev, Ivchenko, and Rössler]Kiselev author author A. A. Kiselev, author E. L. Ivchenko, and author U. Rössler, 10.1103/PhysRevB.58.16353 journal journal Phys. Rev. B volume 58, pages 16353 (year 1998)NoStop [Gawarecki and Zieliński(2020)]Gawarecki2020 author author K. Gawarecki and author M. Zieliński, @noop journal journal Scientific Reports volume 10, pages 22001 (year 2020)NoStop [vanWeperen et al.(2013)vanWeperen, Plissard, Bakkers, Frolov, and Kouwenhoven]vanWeperen2013 author author I. vanWeperen, author S. R. Plissard, author E. P. A. M. Bakkers, author S. M. Frolov, and author L. P. Kouwenhoven, @noop journal journal Nano Lett. volume 13, pages 387 (year 2013)NoStop [Vaitiek ėėnas et al.(2018)Vaitiek ėėnas, Deng, Nygård, Krogstrup, and Marcus]Marcus2018 author author S. Vaitiek ėėnas, author M.-T. Deng, author J. Nygård, author P. Krogstrup, and author C. M. Marcus, 10.1103/PhysRevLett.121.037703 journal journal Phys. Rev. Lett. volume 121, pages 037703 (year 2018)NoStop [Winkler et al.(2017)Winkler, Varjas, Skolasinski, Soluyanov, Troyer, and Wimmer]Winkler2017 author author G. W. Winkler, author D. Varjas, author R. Skolasinski, author A. A. Soluyanov, author M. Troyer, and author M. Wimmer, 10.1103/PhysRevLett.119.037701 journal journal Phys. Rev. Lett. volume 119, pages 037701 (year 2017)NoStop [Bertoni et al.(2011)Bertoni, Royo, Mahawish, and Goldoni]Bertoni2011 author author A. Bertoni, author M. Royo, author F. Mahawish, and author G. Goldoni, 10.1103/PhysRevB.84.205323 journal journal Phys. Rev. B volume 84, pages 205323 (year 2011)NoStop [Iorio et al.(2019)Iorio, Rocci, Bours, Carrega, Zannier, Sorba, Roddaro, Giazotto, and Strambini]Ioro author author A. Iorio, author M. Rocci, author L. Bours, author M. Carrega, author V. Zannier, author L. Sorba, author S. Roddaro, author F. Giazotto, and author E. Strambini, https://doi.org/10.1021/acs.nanolett.8b02828 journal journal Nano Letters volume 19, pages 652 (year 2019), http://arxiv.org/abs/https://doi.org/10.1021/acs.nanolett.8b02828 https://doi.org/10.1021/acs.nanolett.8b02828 NoStop [Wójcik et al.(2021)Wójcik, Bertoni, and Goldoni]Wojcik_anizotropy author author P. Wójcik, author A. Bertoni, and author G. Goldoni, 10.1103/physrevb.103.085434 journal journal Physical Review B volume 103 (year 2021), 10.1103/physrevb.103.085434NoStop [Vezzosi et al.(2022)Vezzosi, Bertoni, and Goldoni]Vezzosi2022 author author A. Vezzosi, author A. Bertoni, and author G. Goldoni, 10.1103/PhysRevB.105.245303 journal journal Phys. Rev. B volume 105, pages 245303 (year 2022)NoStop
http://arxiv.org/abs/2307.07398v1
20230714151620
Galaxy cluster mass accretion rates from IllustrisTNG
[ "Michele Pizzardo", "Margaret J. Geller", "Scott J. Kenyon", "Ivana Damjanov", "Antonaldo Diaferio" ]
astro-ph.CO
[ "astro-ph.CO" ]
Department of Astronomy and Physics, Saint Mary's University, 923 Robie Street, Halifax, NS-B3H3C3, Canada Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA-02138, USA Dipartimento di Fisica, Università di Torino, via P. Giuria 1, I-10125 Torino, Italy Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Torino, via P. Giuria 1, I-10125 Torino, Italy We use simulated cluster member galaxies from Illustris TNG300-1 to develop a technique for measuring the galaxy cluster mass accretion rate (MAR) that can be applied directly to observations. We analyze 1318 IllustrisTNG clusters of galaxies with M_200c>10^14M_⊙ and 0.01≤ z ≤ 1.04. The MAR we derive is the ratio between the mass of a spherical shell located in the infall region and the time for the infalling shell to accrete onto the cluster core. At fixed redshift, an ∼ 1 order of magnitude increase in M_200c results in a comparable increase in MAR. At fixed mass, the MAR increases by a factor of ∼ 5 from z=0.01 to z=1.04. The MAR estimates derived from the caustic technique are unbiased and lie within 20% of the MAR's based on the true mass profiles. This agreement is crucial for observational derivation of the MAR. The IllustrisTNG results are also consistent with (i) previous merger tree approaches based on N-body dark matter only simulations and with (ii) previously determined MAR's of real clusters based on the caustic method. Future spectroscopic and photometric surveys will provide MAR's of enormous cluster samples with mass profiles derived from both spectroscopy and weak lensing. Combined with future larger volume hydrodynamical simulations that extend to higher redshift, the MAR promises important insights into evolution of massive systems of galaxies. Galaxy cluster mass accretion rates from IllustrisTNG Michele Pizzardo <ref>[email protected] Margaret J. Geller <ref> Scott J. Kenyon <ref> Ivana Damjanov <ref> Antonaldo Diaferio <ref>,<ref> Received date / Accepted date ===================================================================================================================================================== § INTRODUCTION In the standard ΛCDM cosmological model, clusters of galaxies form hierarchically by progressive accretion of matter onto density peaks <cit.>. The outskirts of clusters of galaxies are potentially powerful testbeds for this accretion predicted by structure formation models <cit.>. The mass accretion rate (MAR) probes the outer (infall) region of galaxy clusters because the accretion draws material from radii ≳ R_200c. R_200c, a common proxy for the virial radius, is the radius enclosing an average mass density 200 times the critical density of the Universe at the appropriate redshift. For radii ≳ R_200c clusters are not in dynamical equilibrium <cit.>. The MAR is naturally linked to the splashback radius, the average location of the first apocenter of infalling material <cit.>. This radius is located at ∼ (1-2)R_200c and decreases with increasing MAR, mass, and redshift <cit.>. N-body simulations show that MAR's correlate with other properties of cluster halos, including concentration <cit.>, shape <cit.>, degree of internal relaxation <cit.>, and fraction of substructure <cit.>. Many theoretical studies investigate the MAR in ΛCDM cosmologies. The first approaches to computing the MAR in ΛCDM cosmologies were analytic models based on the extended Press-Schechter (EPS) formalism or on Monte-Carlo generated merger trees <cit.>. 
More recent approaches build merger trees from N-body dark matter only simulations <cit.> or from semi-analytical models calibrated with N-body simulations <cit.>. These models imply that halos with mass ≳ 10^14M_⊙ accrete ∼ 30%-50% of their mass over the redshift range z∼ 0.5 to z∼ 0. The MAR is correlated with both halo mass and redshift. Because we observe a cluster at a single redshift and thus cannot measure its history directly, the merger tree approaches are not directly applicable to cluster observations. Soon, spectroscopic and photometric missions will provide huge samples of clusters with both dense spectroscopy and weak-lensing maps extending to large cluster-centric radius. The cluster samples will also cover redshifts ≲ 2. Both weak-lensing <cit.> and the caustic technique <cit.> will allow estimates of cluster mass profiles without assuming dynamical equilibrium. The use of IllustrisTNG <cit.> enables direct application of the caustic technique to track the MAR for galaxy clusters with z ≲ 1. In contrast with a MAR recipe based on purely N-body simulations <cit.>, IllustrisTNG enables the use of galaxies as tracers of the dynamical evolution of galaxy clusters. Following <cit.> we define the MAR as the ratio between the mass of an infalling shell and the infall time. <cit.> and <cit.> compute the MAR based on mass profiles determined from the caustic technique applied to dense spectroscopic surveys of the infall regions of observed galaxy clusters <cit.>. The results are consistent with ΛCDM predictions. <cit.> use IllustrisTNG to calibrate a statistical platform for application of the caustic technique. Their approach is based on mock galaxy cluster member catalogs. The caustic technique calibrated by IllustrisTNG returns the true cluster mass profile within 10% in the radial range (0.6-4.2)R_200c and redshift range 0.01-1.04. We build on the approach of <cit.> to develop an IllustrisTNG recipe for computing the cluster MAR. The recipe is based on simulated galaxies rather than dark matter particles in contrast with previous approaches. We demonstrate that the caustic mass profiles based on galaxy mock catalogs <cit.> yield reliable estimates of the true MAR's. They are also consistent with estimates from previous theoretical and observations investigations <cit.>. Sect. <ref> describes the approach to estimating the MAR. Sect. <ref> describes the IllustrisTNG cluster sample. We derive the average radial velocity profiles, fundamental ingredients for the computation of the MAR. In Sect. <ref> we derive the mass of the infalling shell and the infall time. Sect. <ref> compares the true MAR's with the caustic MAR's. Sect. <ref> compares these MAR results with previous models and observations. We also compare galaxy and dark matter MAR's. Finally we outline future challenges and prospects. We conclude in Sect. <ref>. § RECIPE FOR THE ESTIMATION OF THE MAR We define the MAR of clusters of galaxies as the accretion of matter within a spherical shell in the infall region onto the cluster core: MAR= d M/d t = M_ shell/t_ inf, where M_ shell is the mass of the infalling shell and t_ inf is the time for the shell to accrete onto the cluster core. We locate the infalling shell and determine its width from the radial velocity profile of cluster galaxies obtained from simulations (Sect. <ref>). We compute the true mass of the infalling shell using the true total mass profile of clusters, including dark matter, gas, stars, and black holes. 
For comparison, we also measure the mass profiles from the caustic technique for the same clusters and compute the caustic mass of the infalling shell (Sect. <ref>). This approach enables direct application to observations. We compute the infall time by solving the equation of radial infall with a nonconstant acceleration derived from the true gravitational potential of the cluster. The parameters of the equation are the initial infall velocity of the infalling shell, v_ inf, the center of the infalling shell equivalent to the radial location of v_ inf, R_v_min, and the radius that defines the cluster core, R_200c (Sect. <ref>). § CLUSTER SAMPLE, MASS AND VELOCITY PROFILES Basic inputs to the mass accretion rate include the cluster mass and radial velocity profiles. A large sample of clusters <cit.> from the IllustrisTNG simulations <cit.> is the basis for the determination of the true 3D and projected caustic cumulative mass profiles. As a basis for computing the mass accretion rate, we compute the average cluster radial velocity profile at each redshift. This profile has a characteristic minimum that ultimately determines the accretion rate. Sect. <ref> describes the sample of clusters extracted from IllustrisTNG by <cit.>. We summarize the derivation of the true and caustic mass profiles for each cluster. Sect. <ref> describes the determination of the radial velocity profiles. §.§ Cluster sample and mass profiles <cit.> extract cluster samples from the TNG300-1 run of the IllustrisTNG simulations <cit.>, a set of gravo-magnetohydrodynamical simulations based on the ΛCDM model. Table <ref> lists the cosmological parameters of the simulations. TNG300-1 is the baryonic run with the highest resolution among the runs with the largest simulated volumes. The simulation has a comoving box size of 302.6 Mpc. TNG300-1 contains 2500^3 dark matter particles with mass m_ DM = 5.88 × 10^7 M_⊙ and the same number of gas cells with average mass m_b = 1.10× 10^7 M_⊙. <cit.> use group catalogues compiled by the IllustrisTNG Collaboration to extract all of the Friends-of-Friends (FoF) groups in TNG300-1 with M_200c^3D > 10^14M_⊙. There are 1697 clusters in the 11 redshift bins: z=0.01, 0.11, 0.21, 0.31, 0.42, 0.52, 0.62, 0.73, 0.82, 0.92, and 1.04. For ∼ 22% of the clusters, consistent application of the caustic technique is not possible <cit.>. We remove these clusters from the sample. Table <ref> describes the remaining 1318 clusters we analyze here. The Table includes the number of clusters in each redshift bin, the median and the interquartile range of their masses M_200c^3D, and the minimum and maximum M_200c^3D at each redshift. Our goal is assessment of the caustic technique <cit.> as a basis for reliable estimates of the true MAR. Estimation of the MAR requires robust knowledge of the cluster mass at large cluster-centric distances, ≳ 2R_200c, where virial equilibrium does not hold (Sects. <ref> and <ref>). In the extended radial range (0.6-4.2)R_200 the caustic technique returns an unbiased estimate of the mass with better than 10% accuracy and with a relative uncertainty of 23% provided that the velocity field of the cluster outer region is sufficiently well sampled <cit.>. The caustic technique is independent of equilibrium assumptions. The true 3D and caustic MAR's rest on determination of the true and caustic mass profiles for each cluster in the sample (Table <ref>). 
<cit.> compute the true cumulative mass profile (from now on, the “true mass profile”) for each cluster from the 3D distribution of matter extracted from raw snapshots. These profiles include all matter species: dark matter, gas, stars, and black holes. <cit.> compute the true mass profile for each cluster, M^3D(r), in 200 logarithmically spaced bins covering the radial range (0.1-10)R_200c^3D. These profiles define R_200c^3D and M_200c^3D for each cluster. The basis for the estimation of the caustic cumulative mass profile (from now on, the “caustic mass profile”) is the r-v_ los diagram, the line-of-sight velocity relative to the cluster median as a function of r <cit.>. The r-v_ los diagram is based on catalogues of simulated cluster galaxies that include the right ascension α, declination δ, and total redshift z along the line of sight. <cit.> associate a realistic galaxy mock redshift survey with each simulated cluster. These catalogues include contaminating background and foreground galaxies. <cit.> build the catalogues by identifying galaxies with Subfind substructures that have stellar mass > 10^8M_⊙. This choice of mass limit mimics observable galaxies and assures optimal performance of the caustic technique. Finally, <cit.> apply the caustic technique in the same unconstrained way to all of the mock catalogues and obtain a single calibrated caustic mass profile for each cluster. These profiles define R_200c^C and M_200c^C for each cluster. §.§ Radial velocity profiles The average cluster galaxy velocity profile along the radial direction is fundamental to the evaluation of the mass accretion rate (Sect. <ref>). We compute the set of individual cluster radial velocity profiles based on the comoving position of simulated galaxies with respect to the cluster center, 𝐫_c,i, and the galaxy peculiar velocity, 𝐯_p,i. From the 3D volume we select only galaxies with cluster-centric distances < 10 R_200c^3D. We compute the radial velocity of each galaxy: v_ rad,i=[ v_p,i + H(z_s)a(z_s) r_c,i]· r_c,i/r_c,i, where H(z_s) and a(z_s) are the Hubble function and the scale factor at the redshift z_s of the snapshot. We compute the mean radial velocity profile based on the galaxies within 100 linearly spaced radial bins covering the range (0,10)R_200c^3D. For each redshift snapshot, we compute a single average radial velocity profile. For each of the 100 radial bins of r/R_200c^3D, we compute the mean and the standard deviation of the radial velocities of the galaxies from all of the clusters in that bin, thus obtaining the mean radial velocity profile and the dispersion around it. The solid blue curves in Fig. <ref> show mean radial velocity profiles at three example redshifts: z=0.01, z=0.62, and z=1.04 (in the left, middle, and right panels, respectively). We smooth over statistical fluctuations in the profile by applying a Savitzky-Golay filter with a 10 radial bin window <cit.>. The dash-dotted curves show the resulting smoothed profiles. The shaded light blue band shows the error in the corresponding smoothed profiles. The curves show three clear regimes. Within ∼ 1 R_200c^3D, the average radial velocity has a plateau at zero or small velocity. This region is virialized, with no net infall. At larger distances from the cluster center, ∼ (1-4)R_200c^3D, the infall region, the radial velocity is definitely non-zero and directed toward the cluster center.
Finally, at distances ≳ 4R_200 the galaxy velocity is directed out of the cluster because the galaxies are still coupled to the universal Hubble flow. We use the smoothed radial velocity profiles to identify the turnaround radius where galaxies depart from the Hubble flow as a result of the cluster potential <cit.>. Starting from the largest cluster-centric distances, we identify the turnaround radius with the first intersection between the profile and the axis v_ rad=0. The vertical solid lines in Fig. <ref> show the turnaround radii we measure. Fig. <ref> shows the turnaround radii as a function of redshift. The turnaround radii are in the range (4.52-4.78)R_200c and they decrease by ∼ 3.3% as the redshift increases from z=0.01 to z=1.04. To compute the error in the turnaround radius, we bootstrap 1000 samples of clusters at each redshift. We compute the turnaround radii of the associated average velocity profiles and then derive the error in the mean for the set of resulting radii. The errors are ∼ (0.07-0.8)%. Table <ref> shows that the median cluster mass generally decreases as the redshift increases because very massive systems are progressively less abundant at greater redshift. We check the impact of the changing distribution of cluster masses on the turnaround radii. For each redshift, we build homogeneous samples that include only clusters with mass M_200c^3D in the range (1.02-4.30)· 10^14 M_. The largest minimum and smallest maximum mass for the samples in the 11 redshift bins set this range (see last column of Table <ref>). According to the Kolmogorov-Smirnov test, these clipped samples share indistinguishable mass distributions; the p-values are in the range (0.4-0.9). For each redshift, we compute the average radial velocity profile for the clipped sample and locate the turnaround radius. The turnaround radii for these samples are within ≲ 1.5% of the results obtained from the full samples. They are also unbiased. The turnaround radii are insensitive to the difference among the distribution of cluster masses at different redshifts. The Meiksin <cit.> analytic approximation for calculating the density contrast within the turnaround radius is a remarkably good representation of the IllustrisTNG results. In this approach, the density contrast is δ(r) = 3M(r)/4πρ_bkg r^3 - 1, where M(r) is the mass of the cluster within a distance r from the center and ρ_bkg=Ω_mρ_c the background matter density. In the spherical collapse model, the radial velocity induced by the perturbation is v_ rad/Hr ≈Ω_m^0.6 P(δ) <cit.>. <cit.> approximates the function P(δ) with a non-polynomial; the overdensity within the turnaround radius where v_ rad/Hr=1, is δ_t, Meiksin = 3/2Ω_m^-1.2( 1+√(1+4Ω_m^1.2)). We use the true mass profiles of all of the individual clusters to identify the turnaround radius. In Fig. <ref> the dashed vertical lines show the median turnaround radii based on the <cit.> expression. Averaging over redshift, the Meiksin model overestimates the turnaround radii by a modest ∼ 5%. The relative difference between IllustrisTNG and Meiksin's model increases from ∼ 1-2% at z=(0.01-0.31) to ∼ 3-10% at higher redshift. The worsening of the agreement at high redshift occurs because the Illustris turnaround radius decreases by ∼ 3.3% from z=0.01 to z=1.04 whereas the Meiksin model implies a ∼ 8.8% increase. 
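For reference, the Meiksin threshold quoted above is straightforward to evaluate numerically. The short Python sketch below, given only as an illustration, computes δ_t for an assumed Ω_m and locates the turnaround radius as the first radius at which the density contrast δ(r), built from a tabulated mass profile, drops to δ_t; the profile arrays and the value Ω_m ≈ 0.31 are assumptions for the example, not the actual simulation data products.

```python
import numpy as np

def meiksin_overdensity(omega_m):
    """Turnaround density contrast in the Meiksin approximation:
    delta_t = (3/2) * Om**-1.2 * (1 + sqrt(1 + 4 * Om**1.2))."""
    return 1.5 * omega_m**-1.2 * (1.0 + np.sqrt(1.0 + 4.0 * omega_m**1.2))

def turnaround_radius(r, m_of_r, omega_m, rho_crit):
    """First radius (scanning outward) where the density contrast
    delta(r) = 3 M(<r) / (4 pi rho_bkg r^3) - 1 falls to the Meiksin
    threshold; r and m_of_r tabulate a cluster mass profile."""
    rho_bkg = omega_m * rho_crit
    delta = 3.0 * m_of_r / (4.0 * np.pi * rho_bkg * r**3) - 1.0
    below = np.where(delta <= meiksin_overdensity(omega_m))[0]
    return r[below[0]] if below.size else None

# For omega_m ~ 0.31, meiksin_overdensity(0.31) is roughly 15.
```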
The agreement between the early <cit.> model and the simulations is remarkable given the enormous change in the sophistication of the understanding of the development of structure in the universe over these decades. Estimation of the MAR according to Eq. (<ref>) requires determination of the minimum of the radial velocity profile (see Sects. <ref> and <ref>). At each redshift, we measure the minimum velocity of the smoothed average radial profile. The first three columns of Table <ref> report the minimum radial velocity, v_min, and its cluster-centric location, R_v_min, for each redshift bin. We use bootstrapping to compute the error of R_v_min. We use homogeneous samples in mass extracted as described above to show that R_v_min is insensitive to differences in the distribution of cluster masses at different redshifts. Fig. <ref> shows v_min as a function of its R_v_min, colour coded by redshift. The minimum velocity and its cluster-centric radius are strongly correlated with redshift. From z=0.01 to z=1.04 the v_min of clusters with approximately equal mass (Table <ref>) increases by ∼ 100% in absolute value. The corresponding R_v_min decreases by ∼ 40%. Linear fits of the dependence of v_min on redshift and R_v_min on redshift are: v_min/ km s^-1 = -(240 ± 130) z - (244± 25), R_v_min/R_200c = -(0.919 ± 0.017) z + (2.61 ± 0.0044). Residuals relative to the fits are unbiased as a function of redshift. The average absolute values of the relative differences between the fits and the data are ∼ 2.4% and ∼ 4.1% for Eqs. (<ref>) and (<ref>), respectively. The behavior of v_min as a function of redshift is in qualitative agreement with the general picture of hierarchical structure formation. For equal mass clusters, systems at higher redshift form within higher overdensity peaks. Because of their denser environment, these clusters are denser within ∼ R_200c. Thus more mass is confined within smaller cluster-centric radii than for an equally massive lower redshift cluster. The consequently deeper gravitational potential within ∼ R_200c at greater redshift is the source of the larger minimum infall velocity located at smaller cluster-centric radius. The corresponding mass accretion is also larger in these higher redshift, denser environments. N-body simulations <cit.> show that at fixed mass, the accretion rate at z∼ 1 exceeds the rate at z∼ 0 by roughly an order of magnitude. § M_ SHELL AND T_ INF FROM 3D AND CAUSTIC MASS PROFILES Computation of the MAR of clusters requires an estimate of the mass of the infalling shell, M_ shell, and of the infall time for the cluster to accrete the shell, t_ inf. Section <ref> describes the determination of M_ shell. Section <ref> describes the derivation of the infall time, t_ inf. §.§ The infalling shell and its mass M_ shell The average radial velocity profiles computed in Sect. <ref> (see Fig. <ref>) are the basis for determining the cluster-centric radius of the infalling shell. The radial velocity profiles in Sect. <ref> show a clear infall pattern at radii ∼ (2-3)R_200c where the minimum radial velocity occurs. Table <ref> lists the average minimum radial velocity, v_min, and its radial location, R_v_min, at each redshift. To estimate the MAR, we identify boundaries of the infalling shell R_ shell,i (inner cluster-centric radius), and R_ shell,o (outer cluster-centric radius), where the smoothed radial velocity is 0.75 v_min. The shaded blue vertical regions in Fig. 
<ref> show these shells for three redshifts: z=0.01, z=0.62, and z=1.04 (left to right, respectively). Thin black horizontal lines within the shaded regions indicate 0.75 v_min. The choice, 0.75 v_min, reflects several features of the infall region. First, 0.75 v_min is in the middle of the range ∼ (0.6-0.9) v_min that defines the infall region. Larger fractions lead to unreasonably thin shells that are not a robust representation of the mass accreting onto the cluster. Smaller fractions overlap the virialized region of the cluster. The resulting MAR's for 0.75 v_min are statistically indistinguishable from the MAR's obtained from different choices in the range ∼ (0.6-0.9) v_min. The scatter among the MAR's in this radial range is ≲ 50%. Table <ref> (fifth column) shows the radial location of the infalling shell at each redshift. The cluster-centric distance of the shells decreases as redshift increases: at low redshifts, z≲ 0.3, the shells are located in the range ∼ (1.6-3.4)R_200c; at high redshift, z ≳ 0.73, the shells are in the range ∼ (1.3-2.6)R_200c. This variation results from the decrease of R_v_min with increasing redshift (Sec. <ref>, see Fig. <ref>). The width of the infalling shell, ∼ (1.2-1.3)R_200c, is nearly redshift independent. For each simulated cluster, we compute the true mass, M_ shell^3D, and caustic mass, M_ shell^C, of the infalling shell. For the appropriate radial range (Table <ref>) we compute two true and caustic masses that measure M(<R_ shell,i), the total mass within the inner boundary of the shell, and M(<R_ shell,o), the total mass within the outer boundary of the shell. The mass of the infalling shell is then the difference in these masses M_ shell=M(<R_ shell,o)-M(<R_ shell,i). Fig. <ref> shows the true M_ shell^3D's as a function of the true M_200c^3D's at three different redshifts: z=0.01, z=0.62, and z=1.04 (cyan, violet, and magenta, respectively). The squares with error bars show the medians and interquartile ranges of the simulated data in eight logarithmic mass bins covering the range (1-12.6)· 10^14M_⊙. The lines show a power law fit to the data. M_ shell^3D and M_200c^3D are strongly correlated at every redshift: a change of one order of magnitude in mass leads to a comparable change in the mass of the infalling shell. Kendall's test gives a correlation index of ∼ 0.44 with associated p-values in the range ∼ 10^-5-10^-28. This correlation is expected in the hierarchical cluster formation model, because at fixed redshift higher mass clusters reside within greater overdensities and thus there is more mass in their accreting shells. Conversely, M_ shell is not strongly correlated with redshift. Although the fits show that M_ shell^3D generally increases by ∼ 20-60% from z=0.01 to z=1.04, the error bars show that this correlation is not statistically significant. Coloured points in Fig. <ref> show M_ shell^3D as a function of redshift for individual clusters in 11 redshift bins. Data are coded by cluster mass M_200c^3D. The 11 black circles with error bars indicate the median and interquartile range of M_ shell^3D in each redshift bin. Figure <ref> shows the same trend observed in Fig. <ref> between M_ shell^3D and M_200c^3D. At fixed redshift M_ shell^3D increases with M_200c^3D: a change of ∼ 1 order of magnitude in M_200c^3D corresponds to an analogous change in M_ shell^3D. The median M_ shell^3D's, denoted by the black circles, are uncorrelated with redshift. 
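Since the shell mass defined above is simply a difference of the cumulative mass profile at the two shell boundaries, it can be evaluated in a few lines once a profile is tabulated. The minimal Python sketch below interpolates a tabulated profile (true or caustic) at the shell radii; the input arrays are placeholders rather than the actual IllustrisTNG profiles.

```python
import numpy as np

def shell_mass(r, m_of_r, r_shell_inner, r_shell_outer):
    """M_shell = M(<R_shell,o) - M(<R_shell,i) from a tabulated
    cumulative mass profile, interpolating between tabulated radii.

    r      : radii at which the profile is tabulated (increasing)
    m_of_r : cumulative mass M(<r) at those radii
    """
    m_inner = np.interp(r_shell_inner, r, m_of_r)
    m_outer = np.interp(r_shell_outer, r, m_of_r)
    return m_outer - m_inner
```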
The dependence of M_ shell and redshift is expected in the standard model of structure formation and evolution. In Appendix <ref> we explain how a complex interplay between the density and mass profiles as a function of redshift along with the slightly different cluster mass distribution at various redshifts tends to suppress any correlation between M_ shell and redshift. The caustic technique provides robust estimates of the mass profile of clusters in the infalling shell <cit.>. Points with error bars in Fig. <ref> show the median and the interquartile range of individual ratios between the caustic and true mass M_ shell^C /M_ shell^3D as a function of redshift. Because the infalling region is closer to the cluster center as the redshift increases (Table <ref>, fifth column), the median ratio slowly rises from 0.90 at z = 0 to 1.1 at z = 1.04[<cit.> show that on average in the radial range (0.6-4.2)R_200c the caustic mass is within the 10% of the true mass. However, the caustic to true mass ratio slowly decreases from the lower to the upper end of the calibrated range.]. The typical dispersion in the ratio is ∼ 35%. Thus the caustic technique slightly overestimates or underestimates the mass at smaller and larger cluster-centric distances, respectively. The trend is not statistically significant: Kendall's τ correlation coefficient is ≲ 0.05. The caustic mass M_ shell^C is a robust estimate of M_ shell^3D. Furthermore M_ shell^C has the same dependence on mass, redshift, and R_v_min as M_ shell^3D on average. §.§ The infall time t_ inf Based on the full 3D cluster properties at each redshift, we compute t_ inf, the time for an infalling shell to accrete onto the cluster core. We derive t_ inf by tracking the radial infall with nonconstant acceleration driven by the cluster gravitational potential. The locations of the minima in the average radial velocity profiles (Sect. <ref>) set the limit of the radial range where we compute the infall time. We ultimately employ the infall times based only on the true full 3D data because the uncertainty in these measures derived by application of the caustic method are too large. We compute the time for the center of the infalling shell to reach R_200c as a measure of t_ inf. We measure the infall time starting from the radius of the minimum infall velocity, R_v_min (Figure <ref> and Table <ref>). In other words, t_ inf is the time required for the center of the infalling shell initially located at R_v_min to reach R_200c. Over the radial range R_v_min to R_200c, the gravitational acceleration changes substantially. To compute t_ inf of each cluster in each redshift bin we use an iterative procedure. We divide the radial range (R_200c, R_v_min) into N+1=101 bins. The radial step between two contiguous bins is Δ r = (R_v_min-R_200c)/N. The N+1 steps n=0,1,...,N correspond to r_0=R_v_min, r_1=R_v_min-Δ r, ..., r_N=R_200c. At each step n<N, we calculate Δ t_n, the time for the center of the shell to move from r_n=R_v_min-nΔ r to r_n+1=R_v_min-(n+1)Δ r. Within this small radial interval with Δ r∼ 0.01R_200c≈ 0.01Mpc the gravitational acceleration is nearly constant. From this acceleration and the initial infall velocity of the shell, a_n and v_n respectively, we compute Δ t_n as the positive (physical) solution of the equation a_n/2Δ t_n^2 + v_n Δ t_n + Δ r = 0, that is[In Eq. (<ref>), a_n and v_n are negative. The alternative mathematical solution to that in Eq. 
(<ref>) returns a positive numerator, because √(v_n^2-2a_nΔ r) > -v_n, and hence a negative (unphysical) Δ t_n.] Δ t_n = [-v_n - √(v_n^2-2a_nΔ r)]/a_n. We obtain the acceleration of the infalling shell by computing the cluster gravitational potential ϕ(r) based on the true cluster shell density profile ρ(r). The Poisson equation for an isolated spherical system with shell density profile ρ_I(r) is: ϕ(r) = -4π G [1/r∫_0^r ρ_I(r)r^2 dr + ∫_r^+∞ρ_I(r)r dr]. The uniform cosmological background density, ⟨ρ(z)⟩ = Ω_M(z)ρ_c(z), exerts no net gravitational effect. Thus we compute the gravitational potential based on the mass density fluctuations by replacing ρ_I(r) with ρ(r) - ⟨ρ⟩. The second integral is finite; at sufficiently large cluster-centric distances, ∼ 10R_200c (see Appendix <ref>), the correlated cluster density is ∼ 0. We replace the upper limit of the second integral of Eq. (<ref>) with 10R_200c. From the true cluster potential ϕ(r) we compute the gravitational acceleration induced by the cluster, a(r), by simple differentiation: a(r) = -dϕ(r)/dr. At each step n, we set a_n (Eq. (<ref>)) to the value of a(r) at the position of the center of the infalling shell, a_n = a(R_v_min - nΔ r). At the first step n=0, the initial velocity is v_0 = v_inf, where v_inf is the average cluster radial velocity of the infalling shell at that redshift (fourth column of Table <ref>). At each succeeding step n we increment the initial infall velocity of the shell by taking a constant acceleration in the small time step: v_n = v_{n-1} + a_{n-1}Δ t_{n-1}. Application of Eq. (<ref>) from n=0 to n=N-1 yields a set of N time steps, Δ t_n, n = 0,...,N-1. The sum of these time steps is our estimate of the cluster infall time: t_inf = ∑_n=0^N-1Δ t_n. Figure <ref> shows the resulting infall times for individual clusters as a function of M_200c^3D in three redshift bins: z=0.01, 0.62, and 1.04 (cyan, violet, and magenta, respectively). The squares with error bars show the medians and interquartile ranges of the simulated data in eight logarithmic mass bins covering the range (1-12.6)· 10^14M_⊙ (as in Fig. <ref>). The lines show a power law fit. The infall time t_inf is correlated with redshift. The increased cluster radial acceleration resulting from the larger cluster density at high redshift produces this correlation (Eq. <ref>). Equation <ref> shows that the increased acceleration produces a corresponding decrease in the infall time, t_inf. The squares in Fig. <ref> show the absence of correlation between t_inf and the cluster mass M_200c^3D at each redshift. The higher density of more massive clusters generates a larger acceleration, decreasing t_inf. However, more massive clusters are also more extended, thus increasing the radial range (R_200c, R_v_min) and correspondingly increasing t_inf. These effects result in a minimal dependence of t_inf on M_200c^3D.
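In practice, the iterative estimate of t_inf reduces to a short loop. The following Python sketch is illustrative only: the function name, the acceleration callback accel(r), and the default of 100 steps mirror the description above but are otherwise assumptions, and consistent units (and the conversion of the result to Gyr) are left to the user.

import numpy as np

def infall_time(r_vmin, r_200c, v_inf, accel, n_steps=100):
    """Integrate the radial infall of the shell centre from R_vmin to R_200c
    with piecewise-constant acceleration.

    r_vmin, r_200c : boundaries of the radial range (r_vmin > r_200c)
    v_inf          : mean radial velocity of the infalling shell (negative)
    accel          : callable returning a(r) = -dphi/dr (negative) at radius r
    """
    dr = (r_vmin - r_200c) / n_steps
    v, t = v_inf, 0.0
    for n in range(n_steps):
        r_n = r_vmin - n * dr
        a_n = accel(r_n)
        # positive (physical) root of (a_n/2) dt^2 + v dt + dr = 0
        dt = (-v - np.sqrt(v**2 - 2.0 * a_n * dr)) / a_n
        t += dt
        v += a_n * dt          # update the infall velocity for the next step
    return t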
§ THE MASS ACCRETION RATE We apply Eq. <ref> to estimate the MAR's at different redshifts. We compute MAR^3D, the true MAR based on M_shell^3D and t_inf, and MAR^C, the caustic MAR based on M_shell^C. As described earlier, we use 3D data to model the infall time. We compare MAR^C with MAR^3D to assess the caustic technique as a robust method for estimating the true MAR. According to Eq. <ref>, the MAR^3D of each individual cluster is the ratio between its respective M_shell^3D (see Sect. <ref>) and t_inf (see Sect. <ref>). Fig. <ref> shows the true MAR's of individual clusters as a function of M_200c^3D in three different redshift bins: z=0.01, 0.62, and 1.04 (cyan, violet, and magenta, respectively). The squares with error bars show the medians and interquartile ranges of the simulated data in eight logarithmic mass bins covering the range (1-12.6)· 10^14M_⊙. The lines show a power law fit. The MAR's increase both with increasing mass and with increasing redshift. A correlation between MAR and M_200c is expected because more massive clusters tend to be surrounded by larger amounts of mass. Figure <ref> shows that at fixed redshift a change of ∼ 1 order of magnitude in M_200c^3D corresponds to an analogous change in MAR^3D. This correlation with mass follows from the correlation between M_shell and M_200c (Sect. <ref>): Figs. <ref> and <ref> show that the increase of MAR^3D with M_200c^3D is consistent with the increase of M_shell^3D with M_200c^3D. Fig. <ref> shows that the infall time does not play a significant role in this correlation. In the hierarchical clustering paradigm, clusters of fixed mass at higher redshift reside within denser regions and thus accrete faster than clusters of the same mass at lower redshift. Figure <ref> shows that at fixed mass MAR^3D increases by a factor ∼ 2.2 from z=0.01 to z=0.62, and by a factor ∼ 5 from z=0.01 to z=1.04. This effect originates from the anticorrelation between t_inf and redshift (Sect. <ref>): Figs. <ref> and <ref> show that the increase of MAR^3D with redshift is consistent with this decrease of t_inf with redshift. We fit the individual MAR^3D's and M_200c^3D's at each redshift to the relation MAR = a(M_200c^3D/10^14M_⊙)^b (Table <ref>). The fits show the expected tight correlation between MAR and redshift. The coefficient a that measures the MAR at fixed mass increases with redshift by a factor ∼ 7 from z=0.01 to z=1.04. The slope of the power law is essentially redshift independent, with a mean slope b̅=0.90±0.18. We fit the analytic relation proposed by <cit.> to the MAR as a function of redshift: MAR^3D = A [M_⊙ yr^-1] (M_200c^3D/10^12M_⊙)^B (1+Cz)√(Ω_m0(1+z)^3+Ω_Λ 0). Because of the limited mass range we sample at greater redshifts, we fix B=b̅ and we fit the median MAR^3D as a function of the median M_200c^3D and the redshift. The resulting coefficients are: A=323±49, B=b̅=0.90±0.18, and C=2.08±0.52. We compare these results with earlier work in Sect. <ref>. The MAR^C of individual clusters is the ratio between M_shell^C and the infall time t_inf^fit(M_200c^C) (Sect. <ref>), where we base the computation on the cluster caustic mass M_200c^C. We compute the cluster infall time t_inf^fit(M_200c^C) by evaluating the power law fit of the individual infall times as a function of mass, derived from the true profiles at the given redshift (Fig. <ref>), at the cluster caustic mass M_200c^C. The upper panel of Fig. <ref> shows the median and interquartile range of the true (blue squares and solid error bars) and caustic (red triangles and dotted error bars) MAR's as a function of redshift. Points in the lower panel show the ratios between the median values of the caustic and true MAR's. The figure shows that MAR^C is a robust platform for estimating the true MAR^3D at every redshift. The median MAR^C's are within 20% of the median MAR^3D's at each redshift. The caustic MAR's are also unbiased relative to the true MAR's. Thus the caustic technique provides accurate and robust estimates of the MAR's of clusters in the redshift range 0.01-1.04.
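The per-redshift power-law fits quoted above, MAR = a(M_200c^3D/10^14 M_⊙)^b, can be reproduced with a simple log-log regression. The sketch below is illustrative only; the function name and the use of an unweighted least-squares fit are assumptions, not the fitting code used for the paper.

import numpy as np

def fit_mar_power_law(m200c, mar):
    """Fit MAR = a * (M200c / 1e14 Msun)^b in log-log space and return (a, b).

    m200c : cluster masses in Msun
    mar   : mass accretion rates in Msun/yr
    """
    x = np.log10(np.asarray(m200c) / 1e14)
    y = np.log10(np.asarray(mar))
    b, log_a = np.polyfit(x, y, 1)   # slope b and intercept log10(a)
    return 10.0**log_a, b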
§ DISCUSSION We use the Illustris TNG300-1 simulation to estimate the MAR of galaxy clusters based on the radial velocity profile of cluster galaxies and on the cluster total mass profile. The caustic technique provides robust and unbiased estimates of the true MAR's derived from 3D data over the redshift range 0.01-1.04. We next (Sect. <ref>) compare the Illustris results with previous work on simulated and observed clusters. In Sect. <ref> we assess the bias between the galaxy MAR's and MAR's derived from the dark matter halos for the same sample of clusters drawn from the IllustrisTNG simulations. In Sect. <ref> we discuss future simulations and observational applications of the MAR. §.§ Comparison with Previous Results The dynamically motivated MAR recipe we develop differs significantly from previous merger tree approaches <cit.>. In contrast with a merger tree procedure that is not directly observable, the approach we outline allows the estimation of the MAR of real clusters and comparison with the true MAR's of comparable simulated systems. Most previous theoretical investigations of the MAR are based on N-body dark matter only simulations. These studies employ merger trees that trace the mass accretion of a halo at z=0 back in time. The merger trees follow the change in mass between the halo descendant on the main branch identified at z_i>0 and its most massive progenitor for z_i+Δ z, where Δ z is the time step. The details of the simulation, including the halo fragmentation algorithm and the choice of mass and time step, may affect the results <cit.>. Some studies <cit.> are based on the analytic or semi-analytic extended Press-Schechter formalism <cit.> calibrated with N-body simulations. The upper panel of Fig. <ref> compares MAR's from Fig. <ref> (squares) with two merger tree calculations based on N-body simulations by <cit.> (dotted lines) and <cit.> (solid lines). <cit.> extend the <cit.> investigation of the Millennium simulation <cit.> to Millennium II <cit.>. For the mass of each FoF halo, they take the sum of the masses of its subhalos. They then compute the MAR's from one snapshot to the next. The results by <cit.> are in excellent agreement with the earlier results of <cit.> and the later results of <cit.> and of <cit.>. <cit.> use a large set of ΛCDM N-body simulations run with GADGET2 <cit.>. They use M_200m as a halo mass proxy and compute the MAR's at time steps comparable with the halo dynamical time <cit.>[In Fig. <ref> we account for the differing Hubble parameters of <cit.> and TNG300-1. We display the <cit.> results using the Colossus toolkit <cit.> with the TNG300-1 cosmology.]. Merger-tree MAR's depend on the subhalo finder, the merger tree builder, and the definition of MAR. The difference between the two merger tree models in the upper panel of Fig. <ref> shows the impact of these underlying differences. At each time step a merger tree MAR generally results from the difference between the mass of the descendant and the mass of the most massive progenitor. The most massive progenitor may not be the main branch progenitor <cit.> and thus the MAR is a lower limit. The halo mass definition may also vary. In the <cit.> model the mass is the sum of the masses of the subhalos; in <cit.> the mass definition is M_200m, the mass enclosed within a sphere centered on the halo center with matter density equal to 200 times the background matter density. The choice of time step can also affect the MAR's <cit.>. In Sect. <ref>, we demonstrate that the use of dark matter only simulations may also bias the MAR's toward lower values, but the bias is small.
The MAR recipe we develop depends on the choice of a radial velocity threshold to select the infalling shell width (see Sect. <ref>) and on a prescription for computing the shell infall time (see Sect. <ref>). Variations in these approximations can change the average MAR's by ∼ 50% (Sect. <ref>). Fig. <ref> compares the fits to the Illustris MAR's using Eq. (<ref>) (see Sect. <ref>) (solid line) with <cit.> (dotted line) and <cit.> (dashed line), at z=0.31. In all models, the MAR increases with cluster mass. Because <cit.> and <cit.> do not report errors in their fits, we assume fractional errors comparable with ours (shaded areas). The slope of the IllustrisTNG MAR then agrees with <cit.> and <cit.>. When averaged over the entire redshift range, the slope of the Illustris MAR's as a function of M_200c^3D is 0.90±0.18 (Sect. <ref>), in agreement with the slopes of 1.1-1.2 obtained by <cit.> and <cit.>. The IllustrisTNG MAR's generally exceed the merger tree MAR's at every redshift (upper panel of Fig. <ref>). Again assuming that the fractional errors for <cit.> and <cit.> are comparable with the IllustrisTNG errors, the 50-70% difference in the rates is within the 2σ error. The shaded regions in Fig. <ref> indicate the general consistency of the results. The difference between the IllustrisTNG and merger tree MAR's increases as the redshift increases, but so does the error (Table <ref>). The parameter C of Eq. (<ref>) characterizes the dependence of the MAR on redshift. The IllustrisTNG simulations yield C=2.08±0.52 (Sect. <ref>), whereas <cit.> obtain C=1.17; the difference is ≲ 2σ. The qualitative agreement between the IllustrisTNG and merger tree approaches is reassuring given the substantial differences in the approach to the computation of the MAR's. Based on the spherical accretion prescription proposed by <cit.>, <cit.> develop the first systematic approach to estimating the MAR's of real clusters. <cit.> compute the MAR as the ratio between the mass of an infalling shell and the infall time. Their approach is similar to the IllustrisTNG approach we follow, but they use a ΛCDM N-body dark matter only simulation. <cit.> apply the <cit.> recipe to ten stacked clusters from the HectoMAP redshift survey <cit.>. They also derive MAR's based on the ΛCDM N-body simulation L-CoDECS <cit.>. The bottom panel of Fig. <ref> shows the simulated MAR's (triangles) and the observed MAR's of HectoMAP stacked clusters (stars) as a function of mass, colour coded by redshift, compared with TNG300-1 (squares). The MAR's of the <cit.> model are consistent with the IllustrisTNG MAR's; they are ≲ 10% below the IllustrisTNG MAR's. The agreement between the IllustrisTNG and <cit.> MAR estimates suggests that the cluster accretion physics is insensitive to the detailed determination of the infalling shell width, mass, and infall velocity. The HectoMAP MAR's are also consistent with the TNG300-1 MAR's. The agreement between MAR's of observed clusters and TNG300-1 may reflect the use of simulated galaxies to calibrate the MAR derived from IllustrisTNG. §.§ The Dark matter MAR Previous theoretical work on the MAR is based on N-body dark matter only simulations. Galaxies are generally biased tracers of the underlying distribution of dark matter <cit.>. With Illustris TNG300-1 we can measure the bias directly by estimating the MAR for both galaxies and dark matter for the identical sample of clusters. We use 3D data from the simulation for this test. We begin by locating the infalling shell based on the dark matter.
We follow the procedure outlined in Sect. <ref> using the average radial velocity profiles of the dark matter particles. For each redshift bin, we compute the dark matter radial velocity profile of the individual clusters in 200 logarithmically spaced bins covering the radial range (0-10)R_200c^3D. We choose narrower binning for the dark matter profiles than we did for the galaxies because the number of dark matter particles is much larger than the number of galaxies. We compute a single mean radial velocity profile and smooth it as we did in Sect. <ref>. We identify the minimum radial velocity of the average profile, v_min^dm, and its cluster-centric location, R_v_min^dm. As in <ref>, the boundaries of the infalling shell are the cluster-centric distances where the average velocity is 0.75 v_min^dm. The average radial velocity profiles of dark matter and galaxies agree at every redshift. The radial location of v_min^dm, R_v_min^dm, is on average ∼ 0.04% (∼ 7.9% at most) smaller than R_v_min based on the galaxies. The infall velocity based on the dark matter, v_inf^dm, is on average ∼ 3.1% (∼ 6.5% at most) less than v_inf determined from the galaxies. We compute the mass of the infalling shell, M_shell^3D,dm, and the shell infall time, t_inf^dm, from the dark matter field following Sects. <ref> and <ref> but based only on the mass profile of the dark matter component. To compute M_shell^3D,dm and t_inf^dm we multiply the dark matter profile by (1+Ω_b0/Ω_m0) to account for the baryonic fraction. The resulting MAR^3D,dm are then directly comparable with the MAR^3D computed based on the total mass profile. Figure <ref> compares the dark matter and galaxy MAR's. Blue squares and orange points in the upper panel show the median true MAR's derived from galaxies and dark matter, respectively. The corresponding coloured error bars show the interquartile ranges of the MAR's. Points in the lower panel show the median ratio between the dark matter and galaxy 3D MAR's. The dash-dotted line shows the global median. On average, the dark matter MAR^3D,dm's are ∼ 6.5% below the MAR's based on the galaxies. The scatter between the two MAR's is < 30%, generally less than the uncertainty in the determination of the respective MAR's. Thus galaxies are indeed biased tracers, but the bias is small on these scales. The MAR derived from galaxies should exceed the dark matter MAR because the clustering amplitude of galaxies relative to dark matter is larger at smaller scales and at higher redshift <cit.>. On the scale of the accretion region of galaxy clusters with redshift z ≲ 1, the galaxy clustering excess is, however, small <cit.>. Thus the bias between the galaxy and dark matter MAR's derived from IllustrisTNG is also small. §.§ Future prospects MAR's of large samples of real and simulated clusters make the MAR a probe of cluster astrophysics <cit.> and cosmology <cit.>. The caustic technique <cit.> provides unbiased estimation of the true MAR in the wide redshift range 0.01-1.04. Measurements of the MAR of real clusters are currently limited to z ≲ 0.4 <cit.>. Next generation wide-field spectroscopic surveys will observe the infall regions around large numbers of galaxy clusters with high sampling rates. These dense and deep spectroscopic surveys will provide the necessary observational baseline for measuring the MAR's of thousands of clusters extending to higher redshifts.
The multi-object William Herschel Telescope Enhanced Area Velocity Explorer spectrograph on WHT <cit.> will explore the infall regions of galaxy clusters and their connections to the cosmic web. The Weave Wide Field Cluster survey <cit.> will measure thousands of galaxy spectra in and around 20 clusters with 0.04<z<0.07 out to radii ≲ 5 R_200c. A deeper cluster survey will provide dense spectroscopic surveys of 100 clusters for redshift ≲ 0.5, a new baseline for measuring the MAR in this redshift range. Planned observations with the Prime Focus Spectrograph on Subaru <cit.> and the Maunakea Spectroscopic Explorer on CFHT <cit.> will provide spectroscopic redshifts of hundreds to thousands of galaxy cluster members for thousands of individual clusters with z ≲ 0.6. The caustic technique <cit.> and weak gravitational lensing <cit.> will provide two independent measurements of cluster mass profiles extending to large radii. Neither the caustic technique nor weak lensing relies on the assumption of dynamical equilibrium. These techniques can thus be applied throughout the accretion region where dynamical equilibrium does not hold <cit.>. Present weak-lensing maps from HST and Subaru already provide cluster mass profiles up to 5.7 Mpc for ∼ 20 systems <cit.>. Future facilities will extend these measurements to thousands of clusters. The VRO <cit.> and the Euclid mission <cit.> will provide extended weak-lensing mass profiles for combination with extensive spectroscopy samples of clusters with z ≲ 2. The Illustris TNG300-1 MAR's are based on < 100 clusters at z > 0.52. The mass distribution of TNG300-1 mostly samples clusters with M_200c^3D ∼ (1.2-2)· 10^14M_⊙. High redshift massive clusters have larger MAR's and place tight constraints on models of structure formation and evolution <cit.>. Larger volume hydrodynamical simulations, including MillenniumTNG <cit.> with its 740 Mpc comoving size, will provide larger samples of the most massive systems up to higher redshift. The next generation of simulations should enable tracing of the MAR to redshifts ≳ 1. Extension of MAR determination to early epochs in cluster history will provide new insights into the astrophysics of cluster formation and evolution. § CONCLUSION We use the Illustris TNG300-1 simulation <cit.> to compute the MAR of clusters of galaxies. The recipe, based on the dynamics of cluster galaxies, computes the MAR as the ratio between the mass inside a spherical shell within the cluster infall region and the time for the shell to reach the cluster core. The method builds on the approach by <cit.> and <cit.> and incorporates the caustic technique <cit.> that provides robust, unbiased estimates of the true MAR's. A major goal of the approach is direct application to cluster observations. We use 1318 clusters extracted from TNG300-1 <cit.>. This sample includes both the 3D and caustic mass profiles of each cluster. The clusters have median mass M_200c^3D ∼ (1.3-1.6)· 10^14M_⊙ and cover the redshift range 0.01-1.04. We locate the infalling shell based on the average radial motion of cluster galaxies as a function of cluster-centric distance and redshift. We compute the infall time by solving the equation for radial infall of the infalling shell to R_200c with nonconstant acceleration derived from the true cluster gravitational potential. The MAR's increase with increasing cluster mass and redshift. At fixed redshift, a change of ∼ 1 order of magnitude in M_200c yields a comparable increase in the MAR.
This dependence tracks the increase of the mass of the infalling shell as a function of M_200c. At fixed mass, the MAR increases by a factor of ∼ 5 from z=0.01 to z=1.04 because of the anticorrelation of the infall time with redshift. The correlations between the MAR and cluster mass and redshift are predicted by hierarchical structure formation scenarios. The MAR's from IllustrisTNG build on similar approaches based on N-body simulations <cit.>. In Illustris TNG300-1 we can test the dark matter MAR's against the galaxy MAR's for the identical set of simulated systems. The dark matter MAR's are ∼ 6.5% lower than the galaxy MAR's, reflecting the relative amplitudes of the clustering of galaxies and dark matter as a function of scale and redshift. The IllustrisTNG MAR's complement approaches based on merger trees, which cannot be linked as directly to the observations <cit.>. The IllustrisTNG MAR's lie within 2σ of the merger tree results. On average, the IllustrisTNG MAR's exceed the merger tree MAR's by ∼ 50-70%; the difference increases with redshift. At fixed redshift, the dependence of the merger tree and Illustris MAR's on cluster mass agrees well. The IllustrisTNG MAR's are remarkably consistent with available observations of the MAR as a function of redshift <cit.>. IllustrisTNG enables the exploration of the dynamics of accretion by galaxy clusters with simulated galaxies. The approach provides a framework for obtaining robust observed MAR's based on large spectroscopic samples with ≳ 200 cluster members. Future spectroscopic surveys with multi-object spectrographs like WEAVE <cit.>, PFS <cit.>, and eventually MSE <cit.> will provide large, deep, and dense spectroscopic cluster surveys allowing determination of the MAR up to z ∼ 0.6. Facilities like the Vera Rubin Observatory <cit.> and Euclid <cit.> will provide extended weak-lensing mass profiles for thousands of clusters up to z ∼ 2 that will extend and complement the spectroscopy. The next generation of large volume hydrodynamical simulations including MillenniumTNG will guide the interpretation of observations of the MAR at higher redshift. The larger simulation volume will enable more robust exploration of the most massive clusters. The combined large datasets and enhanced simulations will provide powerful tests of the models of formation and evolution of cosmic structures based on the determination of the MAR. We thank Jubee Sohn for insightful discussions. M.P. and I.D. acknowledge the support of the Canada Research Chair Program and the Natural Sciences and Engineering Research Council of Canada (NSERC, funding reference number RGPIN-2018-05425). The Smithsonian Institution supports the research of M.J.G. and S.J.K. A.D. acknowledges partial support from the grant InDark of the Italian National Institute of Nuclear Physics (INFN). Part of the analysis was performed with the computer resources of INFN in Torino and of the University of Torino. This research has made use of NASA's Astrophysics Data System Bibliographic Services. All of the primary TNG simulations have been run on the Cray XC40 Hazel Hen supercomputer at the High Performance Computing Center Stuttgart (HLRS) in Germany. They have been made possible by the Gauss Centre for Supercomputing (GCS) large-scale project proposals GCS-ILLU and GCS-DWAR.
GCS is the alliance of the three national supercomputing centres HLRS (Universitaet Stuttgart), JSC (Forschungszentrum Julich), and LRZ (Bayerische Akademie der Wissenschaften), funded by the German Federal Ministry of Education and Research (BMBF) and the German State Ministries for Research of Baden-Wuerttemberg (MWK), Bayern (StMWFK) and Nordrhein-Westfalen (MIWF). Further simulations were run on the Hydra and Draco supercomputers at the Max Planck Computing and Data Facility (MPCDF, formerly known as RZG) in Garching near Munich, in addition to the Magny system at HITS in Heidelberg. Additional computations were carried out on the Odyssey2 system supported by the FAS Division of Science, Research Computing Group at Harvard University, and the Stampede supercomputer at the Texas Advanced Computing Center through the XSEDE project AST140063. § THE REDSHIFT DEPENDENCE OF M_ SHELL The fits in Fig. <ref> (Sect. <ref>) indicate that M_ shell^3D increases by ∼ 20-60% from z=0.01 to z=1.04 depending on the cluster mass. Because of the large scatter, any correlation with redshift is statistically insignificant. Here we outline the reasons for this minimal dependence of M_ shell on redshift. This result reflects an interplay between the large-scale cluster density profiles and the slightly different distributions of cluster masses sampled by the simulations as a function of redshift. The upper panel of Fig. <ref> shows the true median density profiles of clusters (ρ̂^3D) relative to z = 0.01. The profiles are scaled to r/R_200c^3D in each redshift bin as noted in the legend. For each scaled profile the bold region indicates the range of radii of the infalling shell (fifth column of Table <ref>). Clusters at higher redshift are denser than their lower redshift counterparts as expected. In the inner equilibrium region, ≲ R_200c^3D (Sect. <ref>, Fig. <ref>), the density ratios are roughly constant from one redshift bin to another. For r ≳ R_200c, the ratios reach a minimum and then increase. At larger cluster-centric distances, the ratios reach the ratios of the average cosmological mass density at the relevant epochs. Subtracting the cosmological mean density from the profiles does not change the qualitative behavior of the relative profiles. The cluster density dominates the total mass density for r ≲ (6-7)R_200c, a radius larger than the characteristic turnaround radius (Sect. <ref>). The infalling shells are outside the virialized region (thick section in each curve of Fig. <ref>). Because the cluster-centric radius of the radial velocity minimum decreases as redshift increases (see Sect. <ref> and Table <ref>), the infalling shells are at different radii. The infalling shell volume decreases for shells nearer to the cluster center. Furthermore the shell thickness is also not constant. Taken together these effects produce an increase of a factor of ∼ 1.9 in M_ shell over the redshift range we probe. The slightly different mass distributions that characterize the cluster samples as a function of redshift (Table <ref>) also affect the redshift dependence of M_ shell. The bottom panel of Fig. <ref> shows the ratios of the median cumulative mass profiles. For the highest redshift clusters, the median mass profiles are ∼ 20% below the median profile at z=0.01, decreasing the correlation between M_ shell and redshift by a comparable fraction.
This effect couples with the effects of the relative density profiles and produces a combined factor of ∼ 1.49 increase in M_ shell over the range sampled by IllustrisTNG. In other words, the cluster mass distribution and the relative profiles as a function of redshift account for the lack of dependence of M_ shell on redshift in the sample of clusters simulated with IllustrisTNG.
http://arxiv.org/abs/2307.04988v3
20230711025810
Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation
[ "Chris Chinenye Emezue", "Alexandre Drouin", "Tristan Deleu", "Stefan Bauer", "Yoshua Bengio" ]
cs.LG
[ "cs.LG", "stat.ME" ]
Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation
Chris Chinenye Emezue† (Technical University of Munich; Mila - Quebec AI Institute), Alexandre Drouin (ServiceNow Research; Mila), Tristan Deleu (Université de Montréal; Mila), Stefan Bauer (Technical University of Munich; Helmholtz AI), Yoshua Bengio (Université de Montréal; Mila; CIFAR AI Chair; CIFAR Senior Fellow)
Contact: [email protected]
Keywords: gflownets, treatment effect, causal discovery, dag-gflownet, causal inference
†Work done as a visiting research student at Mila.
The practical utility of causality in decision-making is widespread and brought about by the intertwining of causal discovery and causal inference. Nevertheless, a notable gap exists in the evaluation of causal discovery methods, where insufficient emphasis is placed on downstream inference. To address this gap, we evaluate seven established baseline causal discovery methods, including a newly proposed method based on GFlowNets, on the downstream task of treatment effect estimation. Through the implementation of a distribution-level evaluation, we offer valuable and unique insights into the efficacy of these causal discovery methods for treatment effect estimation, considering both synthetic and real-world scenarios, as well as low-data scenarios. The results of our study demonstrate that some of the algorithms studied are able to effectively capture a wide range of useful and diverse ATE modes, while some tend to learn many low-probability modes which impacts the (unrelaxed) recall and precision. § INTRODUCTION Causal inference has a wide variety of real-world applications in domains such as healthcare <cit.>, marketing <cit.>, political science, and online advertising <cit.>. Treatment effect estimation, the process of estimating the effect or impact of a treatment on an outcome in the presence of other covariates as potential confounders (and mediators), is a fundamental problem in causal inference that has received widespread interest for decades <cit.>. The existing powerful methods for treatment effect estimation from data require complete (or partial) a priori knowledge of the causal graph <cit.>. When the graph is unknown, this requires solving a problem of causal structure learning, also known as causal discovery. Structure learning involves learning a graph (typically characterized by a directed acyclic graph, or DAG for short) that best describes the dependence structure in a given data set <cit.>. In this approach, structure learning is required to learn a causal graph, which can then be applied to infer the influence of treatments on the outcomes of interest <cit.>. It should be noted that the actual causal graph can only be inferred up to its Markov equivalence class (MEC), and the available observational data does not offer any means of further differentiation <cit.>. Learning a single graph has been shown to lead to poor predictions in a downstream causal inference task <cit.>. Instead of learning a single causal graph, the problem of structure learning can be tackled from a Bayesian perspective where we learn a posterior over the causal graphs. This has the unique advantage of accounting for epistemic uncertainty over the causal graphs in the MEC, thereby leading to a more enriching predictive performance in a downstream causal inference task. However, learning such a posterior over the causal graphs is plagued by challenges.
One major issue is the combinatorially large sample space of causal graphs. The second major challenge is related to MCMC mode-mixing <cit.>: the mode-mixing problem occurs when the chances of going from one mode to a neighboring one become exponentially small and require exponentially long chains, if the modes are separated by a long sequence of low-probability configurations. Therefore, by using MCMC, there is an important set of distributions for which finite chains are unlikely to provide enough diversity of the modes of the distribution <cit.>. While there are a number of existing causal discovery methods (both Bayesian and non-Bayesian), our benchmark study centers on DAG-GFlowNet <cit.>, which is a unique method that leverages a novel class of probabilistic models called Generative Flow Networks <cit.> to approximate the posterior distribution over causal graphs. Although causal inference is an inherent downstream application of causal discovery, most causal discovery evaluation methods are not aligned with causal inference because these two fields are typically studied independently <cit.>. For example, many causal discovery evaluation methods use the structural Hamming distance (SHD), which compares the learned causal DAG (or the samples from the posterior distribution of DAGs in Bayesian structure learning) to the true DAG of the data generating process. Measuring the proximity of the learned DAGs, however, does not reveal much about their actual performance in treatment effect estimation given a treatment and outcome variable of interest, which is a predominantly downstream evaluation. In this work, we set out to benchmark causal discovery methods for the downstream task of treatment effect estimation, specifically the average treatment effect. As an extension to DAG-GFlowNet, we offer insights on the application of GFlowNets to average treatment effect estimation, by comparing it with six other baseline methods for causal discovery. § BACKGROUND We provide a detailed background, in <ref>, on some of the key concepts used in this paper: Bayesian network, interventional distribution, Bayesian causal discovery, average treatment effect, and our structure learning baselines. The structure learning baselines employed in our study follow <cit.>. In addition to DAG-GFlowNet <cit.>, we leveraged six baseline causal discovery algorithms: PC <cit.>, GES <cit.>, MC3 <cit.>, BCDNets <cit.>, Gadget <cit.>, and DiBS <cit.>. Due to space restrictions, we move our explanation of the causal discovery methods to Section <ref> in the Appendix. § EXPERIMENTAL SETUP Figure <ref> provides an illustrative overview of our experimental pipeline. The initial step involves Bayesian causal discovery, where, as discussed in Section <ref>, the objective is to learn a posterior distribution over the directed acyclic graphs (DAGs) that provide the most plausible explanations for the training dataset. The subsequent stage involves the estimation of the average treatment effect (ATE). Here, the ATE for each DAG in the posterior is estimated for every pair of distinct variables. In addition, the DAGs within the Markov equivalence class (MEC) of the true graph are enumerated and used to calculate the ATE estimates for each of them. The evaluation process, in stage 3, then involves a comparison of the average treatment effect (ATE) distributions between the true graph Markov equivalence class (MEC) and the learned posterior distribution of DAGs.
For our experiments on synthetic data, we worked with 6 baselines in total and 26 seeds for each baseline. Each seed corresponds to a causal discovery experiment with a randomly sampled ground-truth graph and observational data. §.§ Causal discovery experiments Following <cit.>, we performed causal discovery experiments on synthetic and real-world scenarios. For PC and GES we implement bootstrapping to obtain DAG posterior samples. Analysis on synthetic data: Following <cit.>, we performed experimental analyses using synthetic graphs and simulated data. We sampled synthetic data from linear Gaussian Bayesian networks with randomly generated structures. We experimented with Bayesian networks of size d=20 variables and considered two different sample sizes of n=20 and n=100. A small sample size of 20 was specifically chosen to evaluate the capabilities of the causal discovery algorithms in a low-data regime. The ground-truth graphs are sampled according to an Erdős-Rényi model. Analysis on flow cytometry data: DAG-GFlowNet was evaluated against the baselines on real-world flow cytometry data <cit.> to learn protein signaling pathways. The data consists of continuous measurements of d = 11 phosphoproteins in individual T-cells. They used the first n = 853 observations and the DAG, inferred by <cit.> and containing 11 nodes and 17 edges, as the dataset and ground-truth graph respectively for their causal discovery experiments. We continued with this direction in our experimental analysis, and our goal was to show the downstream performance of DAG-GFlowNet on the average treatment effect of the phosphoproteins in the protein signaling pathways. §.§ ATE experiments For our ATE experiments, we utilized all pairs of distinct variables: the rationale behind this was to thoroughly explore the possible treatment effects across various combinations. Therefore, given d random variables {X_1,...,X_d}, we performed ATE evaluations on d^2 - d variable pairs. To achieve this in practice, we leveraged the DoWhy package <cit.>, which facilitated the implementation of the do-calculus algorithm. To ensure consistency and clarity in our results, we set the treatment values at 1.0 and 0.0 for all our experiments. The choice of values 1.0 and 0.0 does not relate to the existence or absence of a treatment, as is commonly used in most causal inference literature. Performing such a robust experiment involved a huge computational load. For example, for our baselines, each with 26 random seeds, each consisting of 1000 DAG samples from the posterior, we had to do d*(d-1) * 1000 * 26 * 6 ATE estimations. For the synthetic graph with 20 nodes, this leads to 57M estimations. In order to optimize the computational efficiency of our experiments, we implemented parallelism techniques. The GNU parallel computing tool <cit.> enabled us to distribute the computational workload across multiple processors or cores, thereby significantly reducing the overall computation time.
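For a single sampled DAG and one (treatment, outcome) pair, the per-pair estimation can be sketched with DoWhy as below. This is a hedged illustration: the text states only that DoWhy's do-calculus machinery was used with treatment values 1.0 and 0.0, so the linear-regression backdoor estimator, the GML graph encoding, and the function name are assumptions.

import networkx as nx
import pandas as pd
from dowhy import CausalModel

def ate_for_pair(df: pd.DataFrame, dag: nx.DiGraph, treatment: str, outcome: str) -> float:
    """Estimate ATE(treatment=1.0 vs 0.0) on `outcome` for one candidate DAG."""
    gml = "\n".join(nx.generate_gml(dag))          # encode the DAG for DoWhy
    model = CausalModel(data=df, treatment=treatment, outcome=outcome, graph=gml)
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(
        estimand,
        method_name="backdoor.linear_regression",  # assumed estimator for linear-Gaussian data
        control_value=0.0,
        treatment_value=1.0,
    )
    return estimate.value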
§.§ Evaluation framework Our evaluation methodology goes beyond single-point ATE estimation, which is employed in standard causal inference benchmarking, by performing ATE evaluations based on posterior samples. This approach aims to provide a more comprehensive assessment of the quality of the learned posterior average treatment effect (ATE). Specifically, our evaluation pipeline involves the following metrics: Wasserstein distance (WD): To obtain a quantitative measure of the similarity between the true ATE sample-based distribution and that of the learned ATE, we calculate and report their Wasserstein distance <cit.> using their samples[We utilize the Python implementation available at https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html.]. Precision and Recall: We compute the precision and recall of the modes present in the learned ATE distribution and compare them to the modes in the true ATE distribution. In order to calculate the precision and recall, we first identify the unique modes for each of the true, A_T, and learned, A_', ATE samples. Then, based on these sets of modes, we calculate the true positives (modes from A_T that are found in A_'), false negatives (modes from A_T that are missed in A_'), and false positives (modes from A_' that are not in A_T). Note that the lists A_T and A_' have been regrouped prior to running the evaluation (see Section <ref>). §.§ Additional settings Enumerating the MEC of the true graph: In order to achieve our evaluation using our strategy (see Section <ref>), it is necessary to not work with just one true graph. For a given ground-truth graph, we enumerate all the DAGs in its Markov equivalence class (MEC). Regrouping ATE values: The estimation of average treatment effects (ATE) through regression analysis is susceptible to generating estimates that may exhibit slight variations within numerical precision (e.g., 1.000000001 and 1). As our precision and recall metrics essentially perform "hard matches" on floating point values, it becomes crucial to consider the influence of numerical precision. In order to accomplish this objective, we group ATE values that are numerically close. More details are in <ref>.
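Taken together, the evaluation of one method on one experiment can be sketched as follows, assuming the ATE lists have already been regrouped; the helper name and the simple pairwise matching loop are illustrative assumptions, not the evaluation code itself.

import numpy as np
from scipy.stats import wasserstein_distance

def evaluate_ate(ate_true, ate_learned, rtol=1e-5, atol=1e-8):
    """Distribution-level comparison of true vs. learned ATE samples:
    Wasserstein distance plus precision/recall over the unique modes."""
    wd = wasserstein_distance(ate_true, ate_learned)

    true_modes = np.unique(ate_true)
    learned_modes = np.unique(ate_learned)

    def in_pool(value, pool):
        return bool(np.any(np.isclose(value, pool, rtol=rtol, atol=atol)))

    tp = sum(in_pool(m, true_modes) for m in learned_modes)
    fp = len(learned_modes) - tp
    fn = sum(not in_pool(m, learned_modes) for m in true_modes)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return wd, precision, recall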
§ RESULTS & DISCUSSION The results presented in Table <ref> illustrate the Wasserstein distance (WD), precision, and recall metrics of all baseline methods in terms of their learned ATE samples. Upon examining the Wasserstein distance, PC achieves the lowest Wasserstein distance, while GES attains the highest. When focusing on precision, we observe that apart from BCDNets, all the methods seem to be performing very poorly. However, all the methods attain relatively high recall scores, with the highest achieved by GES and closely followed by DAG-GFlowNet. This high recall indicates the ability of the methods to capture diverse modes within their ATE distribution. The WD, precision, and recall for the synthetic data experiments with 100 samples are presented in Table <ref>. Given an increased number of observational samples compared to the previous table, it is anticipated that the task of causal discovery will be simpler. This is evidenced in the lower WD scores compared to Table <ref>. In a manner similar to the scenario involving 20 samples, it is observed that the methods, with the exception of BCDNets, exhibit a considerably low precision score, while concurrently displaying high recall values. Table <ref> presents the evaluation results of the analysis on flow cytometry using the Sachs dataset. Overall, all methods demonstrate comparable performance in terms of the Wasserstein distance: the range of the WD is 0.004, unlike in Table <ref>, where it is 0.072, or Table <ref>, where it is 0.144. When considering precision, BCDNets and PC outperform DAG-GFlowNet, which exhibits lower performance. Notably, DAG-GFlowNet achieves the highest recall, indicating its ability to learn samples from diverse modes within the true ATE distribution. §.§ Filtering Low-Probability Modes In all our evaluations (Tables <ref>, <ref>, <ref>), we witness a trend of DAG-GFlowNet and other methods exhibiting very low precision scores. In Figure <ref> we observe that DAG-GFlowNet (and other baselines like GES and DiBS) tends to learn new modes, but those modes have a very low probability in the estimated distribution. In our current evaluation framework, however, we include all values in the list that have non-zero densities, which leads to unfair penalization of methods that exhibit multimodal diversity. Consequently, these methods receive disproportionately low precision values. However, when we apply a filtering approach that removes the low-probability modes before calculating the metrics, a more insightful narrative emerges for these methods, as shown in Figure <ref>. In particular, we notice a significant increase in precision for all the methods that initially exhibited very low precision values (in Tables <ref>, <ref>, and <ref>) when we apply a density relaxation tolerance of 0.05 (i.e., for any list of ATEs, we only consider ATE values that have a mass of at least 0.05). This trend is consistent across all the experimental settings (100 samples, 20 samples, Sachs dataset). § CONCLUSION In conclusion, the practical importance of causality in decision-making is widely acknowledged, and the interplay between causal discovery and inference is evident. In order to bridge the gap in the evaluation of causal discovery methods, where limited attention is given to downstream inference tasks, we conducted a comprehensive evaluation that assessed seven established baseline causal discovery methods, including a novel approach utilizing GFlowNets. By incorporating a Bayesian perspective in our evaluation, we offer a unique form of distribution-level insights into their effectiveness for downstream treatment effect estimation. § RELATED WORK Benchmarking methods: Benchmarks have played a crucial role in advancing entire research fields, for instance computer vision with the introduction of ImageNet <cit.>. When it comes to causal discovery, benchmarks usually come in the form of research surveys <cit.>, benchmark datasets <cit.>, learning environments <cit.>, and software packages or platforms <cit.>. However, these methods only evaluate the closeness of the causal DAG, or the samples from the posterior distribution of DAGs in Bayesian structure learning, from various causal discovery methods to the ground-truth DAG. Measuring the proximity of the learned DAGs, however, does not reveal much about their actual performance in treatment effect estimation given a treatment and outcome variable of interest, which is a predominantly downstream evaluation. In causal inference, datasets <cit.>, frameworks <cit.>, and software packages <cit.> provide valuable tools for predicting the causal effects of treatments on outcomes. Causal inference plays a crucial role in decision-making and finds numerous practical applications in various domains such as healthcare, advertising, and decision-making processes. This implies that causal inference has a more downstream impact. In causal inference, the graph represents the structure of the joint distribution of variables, which is then leveraged to identify the causal estimand.
Therefore, the evaluation of causal discovery methods on downstream causal inference tasks provides more practical insights into the effectiveness and practicality of causal methods within real-world scenarios. Typically, the fields of causal discovery and inference are approached separately, resulting in limited intertwined evaluation methods. This is the aspect that distinguishes our work. Similar approaches can be found in studies that jointly integrate causal discovery and inference in an end-to-end manner, such as the notable example of DECI <cit.>. However, our work differs in two key aspects: firstly, we employ the novel GFlowNets for causal inference, increasing our span, and secondly, we specifically focus on linear noise structural equation models, whereas DECI addresses the problem of end-to-end causal inference in non-linear additive noise structural equation models (SEM). § BACKGROUND We offer a detailed background, in this section, on some of the key concepts used in this paper. Bayesian network: A (causal) Bayesian network <cit.> is a probabilistic model over d random variables {X_1,...,X_d}, whose joint probability distribution factorizes according to a DAG G (whose edges express causal dependencies) as: P(X_1,...,X_d) = ∏_k=1^d P(X_k | Pa_G(X_k)), where Pa_G(X) is the set of parents of the node X, i.e., the nodes with an edge onto X in G, interpreted as the direct causes of X. Interventional distribution: Given a random variable X_k, a (hard) intervention on X_k, denoted by do(X_k = a), is obtained by replacing the conditional probability distribution (CPD) P(X_k | Pa_G(X_k)) with a Dirac distribution δ_X_k = a which forces X_k to take on the value of a. Note that intervening on a variable, in a graphical sense, results in a mutilated graph where all incoming edges to the node corresponding to that variable are removed <cit.>. §.§ (Bayesian) Causal discovery Given a dataset D = {x^(i)}_{i=1}^n of n observations, such that x^(j) ∼ P(X_1,...,X_d), the goal of structure learning is to learn the DAG G corresponding to the causal Bayesian network that best models D. It is important to note that D could be observational samples or interventional data samples (obtained from performing hard or soft interventions). In a Bayesian structure learning setting, the task is to approximate the posterior distribution P(G | D) over Bayesian networks that model these observations. A distribution over the DAGs allows quantifying the epistemic uncertainty and the degree of confidence in any given Bayesian network model, which is especially useful when the amount of data to learn from is small <cit.>. §.§ Average treatment effect (ATE) estimation The average treatment effect (ATE) is a quantity that allows us to estimate the impact of a treatment variable on an outcome variable. Given X_T and X_Y, our treatment and effect variables of interest respectively, the ATE on targets X_Y for treatment X_T = a given a reference X_T = b is given by <cit.>: ATE(a,b) = 𝔼[X_Y|do(X_T = b)] - 𝔼[X_Y|do(X_T = a)]. In practice, this causal inference is broken down into two steps: identification and estimation. Identification deals with converting the causal estimand P(X_Y|do(X_T = b)) into a statistical estimand that can be estimated using the dataset D. Some identification methods include the back-door criterion, front-door criterion <cit.>, instrumental variables <cit.> and mediation. Causal estimation then computes the identified statistical estimand from the data set using a range of statistical methods.
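As a small illustration of the graphical notion of intervention described in this background, a hard intervention simply removes all incoming edges of the intervened node. The sketch below uses networkx; the function name and the toy three-node example are assumptions made for illustration.

import networkx as nx

def mutilate(dag: nx.DiGraph, node: str) -> nx.DiGraph:
    """Return the mutilated graph for a hard intervention do(node = a):
    every edge pointing into the intervened node is removed."""
    g = dag.copy()
    g.remove_edges_from(list(g.in_edges(node)))
    return g

# example: do(X2 = a) on X1 -> X2 -> X3 leaves only X2 -> X3
g = nx.DiGraph([("X1", "X2"), ("X2", "X3")])
print(list(mutilate(g, "X2").edges()))   # [('X2', 'X3')]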
The do-calculus algorithm <cit.> provides a powerful, systematic, programmable framework for the identification and estimation of the causal estimand. §.§ Causal discovery baseline algorithms In Table <ref> we briefly describe the structure learning algorithms we use in this work. The structure learning baselines employed in our study follow those utilized by <cit.>. For PC and GES we implement bootstrapping to achieve DAG posterior samples. DAG-GFlowNet: DAG-GFlowNet <cit.> employs GFlowNets <cit.> as a substitute for MCMC in order to estimate the posterior distribution of Bayesian network structures, based on a set of observed data. An overview of GFlowNets is presented in Section <ref> of the Appendix. The process of creating a sample DAG from an approximate distribution is considered a sequential decision task. This involves constructing the graph incrementally, one edge at a time, by utilizing transition probabilities that have been learned by a GFlowNet. We refer the reader to <cit.> for a comprehensive study of DAG-GFlowNet. DiBS: The DiBS framework <cit.> is an approach to Bayesian structure learning that is fully differentiable. It operates within the continuous space of a latent probabilistic graph representation. In contrast to prior research, the DiBS method does not rely on a specific format for the local conditional distributions. Additionally, it enables the simultaneous estimation of the graph structure and the parameters of the conditional distributions. MC3: In the MC3 algorithm (also known as structured MCMC) <cit.>, the authors present a hierarchical Bayesian approach to structure learning that leverages a prior over the classes of variables using nonparametric block-structured priors over Bayes net graph structures. This approach relies heavily on the assumption that variables come in one or more classes and that the prior probability of an edge existing between two variables is a function only of their classes <cit.>. GES: The Greedy Equivalence Search (GES) algorithm <cit.> is a score-based method for causal discovery that has been in use for a considerable amount of time. It operates by performing a greedy search across the set of equivalence classes of DAGs. The representation of each search state is accomplished through a completed partially directed acyclic graph (CPDAG), which includes operators for the insertion and deletion of edges. These operators enable the addition or removal of a single edge, respectively <cit.>. PC: The Peter-Clark (PC) algorithm <cit.> is a prominent constraint-based method for causal discovery. It leverages conditional independence (CI) tests to infer the underlying causal structure. The algorithm yields a completed partially directed acyclic graph (CPDAG) that represents the relationships between variables. It follows a three-step process: 1) identifying the skeleton of the graph, 2) determining v-structures or colliders (X ⟶ Y ⟵ Z) based on d-separation, and 3) propagating edge orientations. Initially, the algorithm creates a fully connected undirected graph using all variables in the dataset. It then eliminates edges that are unconditionally or conditionally independent (skeleton detection), identifies and orients v-structures using the d-separation set, and finally orients the remaining edges while ensuring the absence of new v-structures and cycles. The PC algorithm relies on the assumptions of acyclicity, causal faithfulness, and causal sufficiency. BCDNets: BCDNets <cit.> is another variational inference framework like DiBS. 
In their work, they focus on estimating a distribution over DAGs characterizing a linear-Gaussian SEM and propose techniques to scale to high dimensions, such as using deep neural networks to model a variational family of factorized posterior distributions over the SEM parameters (including the edge weights and noise variance), and a horseshoe prior <cit.> on the edge weights, which promotes sparsity. Gadget: Gadget <cit.> is based on MCMC: sampling DAGs by simulating a Markov chain whose stationary distribution is the posterior distribution. However, to enhance the mixing of the chain, and reduce the space and time requirements, they build a Markov chain on the smaller space of ordered partitions of the node set, each state being associated with multiple DAGs. § GENERATIVE FLOW NETWORKS (GFLOWNETS) Generative Flow Networks <cit.>, also known as GFlowNets, are a type of inference model with a broad range of applications. GFlowNets are capable of generating samples with a probability that is proportional to a given reward function. GFlowNets have been extensively studied and discussed in research papers such as <cit.> and <cit.>. The models facilitate the process of selecting a varied pool of potential candidates, while adhering to a training objective that ensures a nearly proportional sampling based on a specified reward function. GFlowNets are characterized by unique training objectives like the flow-matching condition <cit.>, the detailed balance condition <cit.>, etc., through which a policy is learned. Through the training objectives, this policy is designed to ensure that the probability P_T(s) of sampling an object s is roughly proportional to the value R(s) of a specified reward function applied to that object. The GFlowNets technique is designed to reduce the computational burden of MCMC methods by performing the necessary work in a single generative pass that has been trained for this purpose. GFlowNets are well-suited for modeling and sampling from distributions over sets and graphs, as well as estimating free energies and marginal distributions <cit.>. They excel in problem scenarios with specific characteristics <cit.>: (1) the ability to define or learn a non-negative or non-marginalized reward function that determines the distribution to sample from, (2) the presence of a highly multimodal reward function, showcasing GFlowNets' strength in generating diverse samples, and (3) the benefit of sequential sampling, where compositional structure can be leveraged for sequential generation. Since their inception, GFlowNets have exhibited promising results in diverse domains such as discrete probabilistic modeling <cit.>, molecular design <cit.>, and causal discovery <cit.>. The aim of our research is to provide significant findings on the feasibility of employing GFlowNets for causal inference. § REGROUPING ATE VALUES The estimation of average treatment effects (ATE) through regression analysis is susceptible to generating estimates that may exhibit slight variations within numerical precision (e.g., 1.000000001 and 1). As our precision and recall metrics essentially perform "hard matches" on floating point values, it becomes crucial to consider the influence of numerical precision. In order to accomplish this objective, we group ATE values that are numerically close.
We use the following equation to test whether two floating point values, a and b, are equivalent: |a - b| <= (atol + rtol * |b|), where rtol is the relative tolerance parameter and atol is the absolute tolerance parameter. Practically, we use the `isclose' function from the Numpy package[<https://numpy.org/doc/stable/reference/generated/numpy.isclose.html>] which uses the equation above and returns a boolean indicating whether a and b are equal within the given tolerance. We used the default values from Numpy, rtol=1e-05, atol=1e-08. We apply regrouping to the list of ATEs for precision and recall evaluation, but not for Wasserstein distance.
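A direct implementation of this regrouping might look like the following sketch. The function name and the greedy assignment of each value to the first matching representative are assumptions; the tolerance test is exactly the numpy.isclose criterion quoted above.

import numpy as np

def regroup(values, rtol=1e-5, atol=1e-8):
    """Map every ATE value that is numerically close to an already-seen
    representative onto that representative."""
    reps, grouped = [], []
    for v in values:
        for r in reps:
            if np.isclose(v, r, rtol=rtol, atol=atol):
                grouped.append(r)
                break
        else:                      # no representative matched: v starts a new group
            reps.append(v)
            grouped.append(v)
    return grouped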
http://arxiv.org/abs/2307.05375v1
20230709095034
Emotion Analysis on EEG Signal Using Machine Learning and Neural Network
[ "S. M. Masrur Ahmed", "Eshaan Tanzim Sabur" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.HC" ]
Emotion Analysis on EEG Signal Using Machine Learning and Neural Network
S. M. Masrur Ahmed (Software Engineer, bKash Limited, Dhaka, Bangladesh; [email protected]), Eshaan Tanzim Sabur (Department of Computer Science, BRAC University, Dhaka, Bangladesh; [email protected])
August 12, 2023
Emotion has a significant influence on how one thinks and interacts with others. It serves as a link between how a person feels and the actions one takes, or it could be said that it influences one's life decisions on occasion. Since the patterns of emotions and their reflections vary from person to person, their inquiry must be based on approaches that are effective over a wide range of population regions. To extract features and enhance accuracy, emotion recognition using brain waves or EEG signals requires the implementation of efficient signal processing techniques. Various approaches to human-machine interaction technologies have been ongoing for a long time, and in recent years, researchers have had great success in automatically understanding emotion using brain signals. In our research, several emotional states were classified and tested on EEG signals collected from a well-known publicly available dataset, the DEAP Dataset, using SVM (Support Vector Machine), KNN (K-Nearest Neighbor), and an advanced neural network model, RNN (Recurrent Neural Network), trained with LSTM (Long Short Term Memory). The main purpose of this study is to improve methods of emotion recognition using brain signals. Emotions, on the other hand, can change with time. As a result, the changes in emotion over time are also examined in our research. Keywords: emotion recognition, EEG signal, DEAP dataset, fft, Machine Learning, SVM, KNN, DEAP, RNN, LSTM § INTRODUCTION Emotion is defined as a person's conscious or unconscious behavior that indicates our response to a situation. Emotion is interconnected with a person's personality, mood, thoughts, motivation, and a variety of other aspects. Fear, happiness, wrath, pride, anger, panic, despair, grief, joy, tenseness, surprise, confidence, and enthusiasm are common emotions experienced by humans <cit.>. The experience can be either positive or negative. In the light of this, physiological indications such as heart rate, blood pressure, respiration signals, and Electroencephalogram (EEG) signals might be useful in properly recognizing emotions. Emotion recognition has always been a major necessity for humanity, not just for usage in fields like computer science, artificial intelligence, and life science, but also for assisting those who require emotional support. For a long time, experts couldn't figure out a reliable way to identify true human emotion. One method was to use words, facial expression, behavior, and image to recognize one's emotions <cit.>. Researchers found that subject answers are unreliable for gauging emotion; people are unable to reliably express the strength and impact of their feelings. Furthermore, it is simple to manipulate self-declared emotions, resulting in incorrect findings. As a result, researchers had to shift their focus to approaches that do not rely on subject reactions.
The development of brain-computer interface (BCI) techniques and electroencephalogram (EEG) signals provided more accurate methods for detecting human emotions. They introduced an involuntary approach that yields more accurate and reliable results: involuntary signals cannot be consciously controlled, so they reveal people's true feelings and express genuine emotions. A reliable EEG-based emotion recognition system could help people regulate their emotions, open up new possibilities in fields like education, entertainment, and security, and aid people suffering from alexithymia or other psychiatric conditions. The goal of our paper is to apply effective techniques to the DEAP dataset to extract features from EEG signals using band waves, and to apply machine learning algorithms and neural network models to assess their efficiency with respect to valence-arousal, EEG regions, and band waves. § LITERATURE REVIEW The EEG research community is expanding into a number of different fields. Vanitha et al. <cit.> aim to connect stress and EEG and discuss how stress can have both beneficial and adverse effects on a person's decision-making process. They also discuss how stress affects interpersonal, intrapersonal, and academic performance, and argue that stress can cause insomnia, lowered immunity, migraines, and other physical problems. Jin et al. <cit.>, analyzing emotions, reported promising results, claiming that combining FFT, PCA, and SVM yielded about 90 percent accuracy. This suggests that the feature extraction stage, rather than the complexity of the classification algorithm, determines the accuracy of a model, and that such categorization systems can offer consistent accuracy and recall. Liu et al. <cit.> proposed a fractal-based algorithm to identify and visualize emotions in real time and found that the gamma band could be used to classify emotion; they analyzed different kinds of EEG features to find the trajectory of emotional changes and proposed a simple method to track the changes in emotion over time. Related work built a bimodal deep autoencoder and a single deep autoencoder to produce shared representations of audio and images, explored recognizing emotion from physiological signals, combined eye-movement and EEG data with two different fusion strategies, and tested the framework on cross-modal learning tasks, introducing a novel approach that combines deep learning and physiological signals. The following authors also utilized the DEAP dataset to analyze emotional states. Xing et al. <cit.> developed a stacked autoencoder (SAE) to decompose EEG data and classify them using an LSTM model; the observed accuracy was 81.1 percent for valence and 74.38 percent for arousal. Chao et al. <cit.> investigated a deep learning architecture, reaching 75.92 percent for arousal and 76.83 percent for valence states. Mohammadi et al. <cit.> classified arousal and valence using the entropy and energy of each frequency band and reached an accuracy of 84.05 percent for arousal and 86.75 percent for valence. Xian et al. <cit.> utilized MCF with statistical, frequency, and nonlinear dynamic characteristics to predict valence and arousal with 83.78 percent and 80.72 percent accuracy, respectively. Ang et al.
<cit.> developed a classification method based on wavelet-transform time-frequency features and an ANN. For the happy emotion, the classification rate was 81.8 percent using the mean feature and 72.7 percent using the standard deviation; the performance of the frequency-domain features for sad emotions was 72.7 percent. Alhagry et al. <cit.> developed a deep learning technique for identifying emotions from raw EEG data that used long short-term memory (LSTM) neural networks to learn features from EEG signals and then classified these features as low/high arousal, valence, and liking. The DEAP dataset was used to evaluate the technique, and the method's average accuracy was 85.45 percent for arousal and 85.65 percent for valence. § METHODOLOGY §.§ Data Materials For our research, we have chosen the DEAP <cit.> dataset. The DEAP dataset for emotion classification is freely available on the internet. A number of physiological signals found in the DEAP dataset can be utilized to determine emotions; it includes information on four main types of states: valence, arousal, dominance, and liking. Because various sample rates and different types of tests were used during data gathering, the DEAP dataset is an amalgamation of many different data types. EEG data were gathered from 32 participants, comprising 16 men and 16 women, over 32 channels. The EEG signals were collected by playing 40 different music videos, each lasting 60 seconds, and recording the responses. After viewing each video, participants were asked to rate it on a scale of one to nine. In total 1280 video ratings were received, i.e., the number of videos (40) multiplied by the number of participants (32). The signals were then downsampled from 512 Hz to 128 Hz and denoised using bandpass and lowpass frequency filters. The EEG signals were acquired from 32 sensor positions (using the international 10-20 positioning system), including Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, T8, CP2, P4, P8, PO4, and O2. A frontal face video was also recorded for 22 of the participants. Several signals, including EEG, electromyograms, respiration, plethysmographs, temperature, and so on, were gathered as 40-channel data during each subject's 40 trials, with each channel representing a different signal. EEG data are stored in 32 of the 40 available channels; the remaining channels record EOG, EMG, ECG, GSR, RSP, TEMP and PLET data. §.§ Data Visualization We extracted valence and arousal ratings from the dataset. The combination of valence and arousal can be converted to emotional states: high arousal positive valence (excited, happy), low arousal positive valence (calm, relaxed), high arousal negative valence (angry, nervous) and low arousal negative valence (sad, bored). Following Russell's circumplex model, we analyzed the changes in emotional state along with the number of trials for each group; Russell's circumplex model helped classify the DEAP dataset. To visualize the scale with real numbers, the DEAP dataset employs self-assessment manikins (SAMs) <cit.>. Based on the self-assessment ratings, 1–5 and 5–9 were chosen as the scales <cit.>: the label was set to "positive" if the rating was greater than or equal to 5, and to "negative" if it was less than 5.
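To make the data description concrete, the following is a minimal, illustrative sketch of loading one participant's preprocessed DEAP file and applying the 5-point threshold labelling just described. It assumes the publicly released preprocessed Python files (pickled dicts with 'data' and 'labels' arrays); the file name and variable names are placeholders, not the authors' code.

```python
import pickle
import numpy as np

# Assumed layout of the preprocessed DEAP files ("data_preprocessed_python"):
# 'data'   -> (40 trials, 40 channels, 8064 samples at 128 Hz)
# 'labels' -> (40 trials, 4 ratings: valence, arousal, dominance, liking)
with open("s01.dat", "rb") as f:                   # file name is illustrative
    subject = pickle.load(f, encoding="latin1")    # files were pickled under Python 2

eeg = subject["data"][:, :32, :]      # keep only the 32 EEG channels
ratings = subject["labels"]

valence, arousal = ratings[:, 0], ratings[:, 1]
# Threshold labelling described above: >= 5 is "positive", < 5 is "negative"
valence_pos = valence >= 5
arousal_pos = arousal >= 5
```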
We used a different approach to determine "positive" and "negative" values. The DEAP participants rated valence and arousal on a scale of 1 to 9. We believe that splitting the dataset at the mean value is not a good approach, because there may be no participants who rate between, say, 1-2 or 4-6, so deriving the separation from a mean could introduce bias; conversely, all participants may have given ratings between 5 and 9. To avoid biased analysis, we wanted to split positive from negative values at the middle of the observed ratings, so we used median values to distinguish between "positive" and "negative". For each experiment we determined whether valence and arousal were positive or negative: values greater than the median are considered "positive", while values less than the median are considered "negative". Four labels were created for our research: high arousal low valence (HALV), low arousal high valence (LAHV), high arousal high valence (HAHV), and low arousal low valence (LALV). §.§ Channel Selection We carried out two types of FFT-based studies. For the RNN model with LSTM built on FFT features, a total of 14 channels matching the Emotiv EPOC+ montage were carefully selected; the channel indices are [1, 2, 3, 4, 6, 11, 13, 17, 19, 20, 21, 25, 29, 31], and the number of bands is 6, with band edges [4, 8, 12, 16, 25, 45] Hz. In another study we also examined the relation between the time domain and the frequency domain with the help of the FFT. §.§ FFT The fast Fourier transform (FFT) is an efficient algorithm for computing the discrete Fourier transform (DFT) of a sequence. It is used to solve a variety of equations and to depict frequency activity graphically. Fourier analysis is a signal processing technique used to convert a digital signal (x) of length (N) from the time domain to the frequency domain (X) and vice versa. The FFT is widely used to estimate the power spectral density (PSD) of an EEG signal. The PSD describes how signal power is distributed over frequency; it can be computed directly from the signal using the FFT or indirectly by transforming the estimated autocorrelation sequence. §.§ RNN and LSTM RNNs rose to prominence as computing power improved, data volumes exploded, and long short-term memory (LSTM) technology became available in the 1990s. Because of their internal memory, which allows them to retain key input details, RNNs can be very precise at forecasting what comes next; they are popular because they handle sequential data such as time series and speech well and can gain a deeper understanding of a sequence and its context than other algorithms <cit.>. Plain RNNs have only a short-term memory; when combined with LSTM units they gain a long-term memory as well. Long short-term memory networks are an extension of recurrent neural networks that effectively expands this memory, which makes them well suited to learning from long sequences with long time gaps between relevant events. The layers of such an RNN are built using LSTM units.
Because LSTM units assign learned "weights" to information, an RNN can assimilate new information, forget it, or give it enough importance to alter the output. With the help of LSTMs, RNNs can remember inputs for a long time, because LSTMs store data in a memory comparable to that of a computer: the LSTM can read, write, and delete information from its memory. This memory can be thought of as a gated cell, where "gated" means that the cell decides whether to store or erase data (i.e., whether to open the gates) based on the importance it assigns to the data. The importance weights are themselves learned, so the network gradually learns which data are critical and which are not. §.§ Feature Extraction Features can be extracted from EEG data in a variety of ways. FFT-based feature extraction requires periodogram and power spectral density calculations and the combination of band waves of various frequencies. The Welch method <cit.> is a modified segmentation scheme for calculating the average periodogram. It can be described by the equations below: the power spectral density of the i-th segment, P_i(f), is defined first, and the Welch power spectrum, P_welch(f), is then given as the average of the periodograms over all segments. P_i(f)=1/(M U)|∑_n=0^M-1 x_i(n) w(n) e^-j 2 π f n|^2 P_welch(f)=1/L∑_i=0^L-1 P_i(f) The power spectral density (PSD) shows how a signal's power is distributed in the frequency domain. Among the PSD estimators, Welch's method and the multitaper approach have demonstrated the best results <cit.>. The input <cit.> signal x[n], n = 0,1,2,…,N-1, is divided into a number of overlapping segments of length M. With half-overlapping segments, the i-th segment is x_i(n) = x[i×M/2 + n], where n=0,…,M-1 and i=0,1,2,…,L-1. Each segment is multiplied by a smooth window w(n); in most cases the Hamming window is employed, whose formula is w(n)=0.54-0.46cos[2nπ/M]. Here, U=(1/M) ∑_n=0^M-1 w^2(n) denotes the mean power of the window w(n), so that M U=∑_n=0^M-1 w^2(n) is the energy of the window function w(n) of length M. Note that L denotes the number of data segments. For validation, accuracy is the most popular metric, but a model's performance cannot be judged by accuracy alone, so we also used precision, recall, and F-score. The reported metrics are the means over all folds of the cross validation. § RESULTS In our research we examined the relation among EEG channels, the time domain, and the frequency domain using Welch's periodogram together with the band waves and the FFT; the band waves are associated with different emotions. The time-domain EEG signals (shown in the corresponding figure) exhibit a great deal of electrical activity across the EEG channels, and from the time domain we obtain the frequency-domain graphs along with the power spectral density across the channels with the help of the Fourier transform. In our study the fast Fourier transform was evaluated from 4 Hz to 45 Hz, so by comparing this frequency content with the time-domain signal we obtain the PSD in the frequency domain.
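As a rough illustration of the Welch band-power features described in the Feature Extraction subsection, the sketch below computes the PSD of one trial with a Hamming window and integrates it over the frequency bands used in this work. It relies on scipy and numpy; the 256-sample segment length and the variable names are our own illustrative choices, not the authors' code.

```python
import numpy as np
from scipy.signal import welch

FS = 128                                   # DEAP preprocessed sampling rate (Hz)
BAND_EDGES = [4, 8, 12, 16, 25, 45]        # band edges used in this study (Hz)

def band_powers(trial, fs=FS):
    """Welch band power per channel for one trial of shape (channels, samples)."""
    freqs, psd = welch(trial, fs=fs, window="hamming", nperseg=256, axis=-1)
    feats = []
    for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:]):
        idx = (freqs >= lo) & (freqs < hi)
        # integrate the PSD over the band for every channel
        feats.append(np.trapz(psd[:, idx], freqs[idx], axis=-1))
    return np.concatenate(feats)           # length = n_channels * n_bands

# Example: stack the features of all 40 trials of one subject
# X = np.stack([band_powers(trial) for trial in eeg])
```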
From the time-frequency domain we can see the electrical activity of the brain over multiple time intervals, which shows the relation between frequency, brain activity, and voltage. For the first FFT analysis, we calculated the mean, standard deviation, minimum, first quartile, median, third quartile, and maximum values over 1240 trials for the six sensor-based regions and the four band-power values. For this analysis we used SVM and K-NN classifiers; the SVM classifier used a linear kernel. We also calculated the accuracy separately for valence and arousal. In the first study we set out to observe the variations of the electrical activity of the brain over time. To extract the EEG signals, the 32 sensor sites were grouped into globally recognizable zones: frontal, central, temporal, parietal, and occipital placements. Topographical maps are used to visualize the spatial distribution of activity; this useful visualization method allows us to examine how the data change from one time point to another. While the subject in this study was watching a video, we analyzed the changes in electrical activity from 0.153 to 0.273 seconds. The voltage changes differ across frequencies; since band waves are defined by frequency ranges and different band waves indicate different ranges of emotion, the subject may feel different emotions at a particular time point. For the second study, we employed metadata during the FFT processing for the purpose of a meta-vector analysis. The raw data were split into 2-second slices with a 0.125-second interval between consecutive slices, and a two-second FFT of each channel was carried out over the different frequencies in sequence. A total of 14 channels matching the Emotiv EPOC+ montage were carefully selected; the channel indices are [1, 2, 3, 4, 6, 11, 13, 17, 19, 20, 21, 25, 29, 31] and the number of bands is 6, with band edges [4, 8, 12, 16, 25, 45] Hz. The band power is averaged over 2 seconds; the window size was 256 samples with a step size of 16, i.e., one update every 0.125 seconds, and the sampling rate was set to 128 Hz. The FFT was then performed on all subjects with these settings to obtain the required output. Neural networks and other forms of artificial intelligence require a starting collection of data, referred to as a training dataset, that serves as a foundation for subsequent application and use; before the model can interpret and learn from the training data, the data must be appropriately labeled. The lowest value in our data is about 200 and the greatest value is above 2000, so plotting the values directly produces widely scattered points and makes the analysis difficult: the goal of the learning procedure is to fit and optimize a pattern, and large differences between the plotted points prevent the optimization from converging. To fix this, the values were rescaled to a common range, commonly known as scaling. No information is lost by scaling; the data are merely brought to a range in which the differences between points are small. To achieve this, StandardScaler transforms the data into a distribution with a mean of zero and a standard deviation of one, as sketched below.
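A minimal sketch of the scaling and splitting steps just described, using scikit-learn; X and y are placeholder names for the FFT feature matrix and the emotion labels, and the random seed is arbitrary.

```python
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)    # zero mean, unit variance per feature column

# 75% training / 25% testing split, as described in the text
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.25, random_state=42)
# Note: fitting the scaler on X_train only and reusing it on X_test would avoid
# information leaking from the test portion into the scaling statistics.
```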
For multivariate data this is done feature by feature, in other words independently for each column of the data: each value is reduced by the mean of its column and then divided by the column's standard deviation. After that, we divided the dataset into two parts, a training set and a testing set: training was carried out on 75% of the data (456,768 samples) and testing on the remaining 25% (152,256 samples). The RNN was kept sequential. The first LSTM layer of the sequential model has 512 units, the second 256, the third and fourth 128 and 64, and the final LSTM layer has 10 units. Since we are conducting a classification whose targets are 0 or 1, a sigmoid is used at the output; the other activation functions are ReLU. The rectified linear activation function (ReLU) is a piecewise linear function that outputs the input directly if it is positive and zero otherwise. Batch normalization was used; it standardizes the inputs to a layer for each mini-batch, which stabilizes the learning process and significantly reduces the number of training epochs required for deep networks. By randomly dropping out nodes during training, a single model can be used to simulate a huge variety of distinct network designs. This is referred to as dropout, and it is an extremely computationally efficient and remarkably successful regularization technique for reducing overfitting and improving generalization error in all types of deep neural networks. In our case, the dropout rates began at 30%, increased to 50%, then 30%, 30%, 30%, and finally 20%. We worked with three-dimensional inputs; when we converted to a dense layer, we obtained a one-dimensional representation in order to make a prediction. RMSprop was used as the optimizer with a learning rate of 0.001, a rho value of 0.9, and an epsilon value of 1e-08. RMSprop divides the gradient by the root of a moving (discounted) average of the squared gradients; this application of RMSprop makes use of conventional momentum rather than Nesterov momentum, and the centered version additionally estimates the variance from a moving average of the gradients. In our runs the accuracy increases very gradually, and the learning rate plays a major part: with a larger learning rate the accuracy would also increase rapidly at first, but once the optimum is reached the process reverses and the accuracy decreases at a faster rate. That is why the learning rate was kept small; removing one zero from it (i.e., using 0.01) made the accuracy decrease significantly. As our loss function we used the mean squared error (MSE), the most basic and most widely used loss function, typically taught in introductory machine learning courses. To calculate the MSE, the difference between the model's predictions and the ground truth is squared and then averaged across the whole dataset; since the errors are squared, the MSE can never be negative. A sketch of the resulting network definition is given below.
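The following sketch shows one way the stacked LSTM described above could be written in Keras. The layer sizes, optimizer settings, and loss follow the text; the exact placement of the dropout and batch-normalization layers, the input shape, and the output dimension are guesses for illustration and are not taken from the paper.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, BatchNormalization
from tensorflow.keras.optimizers import RMSprop

timesteps, n_features, n_outputs = 10, 70, 1   # placeholder dimensions

model = Sequential([
    LSTM(512, return_sequences=True, input_shape=(timesteps, n_features)),
    Dropout(0.3), BatchNormalization(),
    LSTM(256, return_sequences=True),
    Dropout(0.5), BatchNormalization(),
    LSTM(128, return_sequences=True),
    Dropout(0.3), BatchNormalization(),
    LSTM(64, return_sequences=True),
    Dropout(0.3), BatchNormalization(),
    LSTM(10),                          # final LSTM layer returns a flat vector
    Dropout(0.3),
    Dense(32, activation="relu"),      # ReLU hidden layer (placement is a guess)
    Dropout(0.2),
    Dense(n_outputs, activation="sigmoid"),
])

model.compile(optimizer=RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-08),
              loss="mse", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=1000)
```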
Because of the squaring in the loss, the MSE places greater emphasis on outlier predictions with large errors, which helps guarantee that the trained model does not produce such outlier predictions. We tried to reduce the loss and increase the accuracy, saving the model and tracking the metrics every 50 epochs. For the first 50 epochs the training loss fell from 0.1588 to 0.06851 and the validation loss reached 0.06005, while the training accuracy increased from 9.61 percent to 45.784 percent and the validation accuracy reached 53.420 percent. Over the second 50 epochs the training loss fell to 0.06283 and the validation loss to 0.05223, with the training accuracy rising to 51.661 percent and the validation accuracy to 60.339 percent. Over the third 50 epochs the training loss fell to 0.05992 and the validation loss to 0.04787, with the training accuracy rising to 54.492 percent and the validation accuracy to 64.413 percent. After 200 epochs the metrics changed only very slowly. We ran 1000 epochs and obtained a training accuracy of 69.21% and a validation accuracy of 78.28%. § CONCLUSION To summarize, in this research we describe the EEG-based emotion recognition challenge, as well as existing and proposed solutions to this problem. Emotion detection through EEG waves is a relatively new and exciting area of study and analysis. SVM (Support Vector Machine), KNN (K-Nearest Neighbor), and an RNN trained with LSTM were used to identify and evaluate numerous emotional states from EEG signals acquired from the DEAP dataset. According to the findings, the suggested method is a very promising option for emotion recognition, owing to its remarkable ability to learn features from raw data in a short period of time; compared to typical feature extraction approaches, it produces higher average accuracy over a larger number of people. 00 b1 S. D. Rama Chaudhary Ram Avtar Jaswal, “Emotion recognition based on eeg using deap dataset,” European Journal of Molecular & Clinical Medicine, vol. 8, no. 3, pp. 3509–3517, 2021, issn: 2515-8260. b2 X. Cheng, C. Pei Ying, and L. Zhao, “A study on emotional feature analysis and recognition in speech signal,” Measuring Technology and Mechatronics Automation, International Conference on, vol. 1, pp. 418–420, Apr. 2009. doi: 10.1109/ICMTMA.2009.89. b3 C. Huang, Y. Jin, Q. Wang, L. Zhao, and C. Zou, “Multimodal emotion recognition based on speech and ecg signals,” vol. 40, pp. 895–900, Sep. 2010. doi: 10.3969/j.issn.1001-0505.2010.05.003. b4 Y. Wang, X. Yang, and J. Zou, “Research of emotion recognition based on speech and facial expression,” TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 11, Jan. 2013. doi: 10.11591/telkomnika.v11i1.1873. b5 S. A. Hussain and A. S. A. A. Balushi, “A real time face emotion classification and recognition using deep learning model,” Journal of Physics: Conference Series, vol. 1432, p. 012087, Jan. 2020. doi: 10.1088/1742-6596/1432/1/012087. [Online]. Available: https://doi.org/10.1088/1742-6596/1432/1/012087. b6 V. Vanitha and P. Krishnan, “Real time stress detection system based on eeg signals,” vol. 2016, S271–S275, Jan. 2016. b7 J. Jin, X. Wang, and B. Wang, “Classification of direction perception eeg based on pca-svm,” in Third International Conference on Natural Computation (ICNC 2007), vol. 2, 2007, pp.
116–120. doi: 10.1109/ICNC.2007.298. b8 W. Liu, W.-L. Zheng, and B.-L. Lu, “Emotion recognition using multimodal deep learning,” vol. 9948, Oct. 2016, isbn: 978-3-319-46671-2. doi: 10.1007/978-3-319-46672-958. b9 X. Xing, Z. Li, T. Xu, L. Shu, B. Hu, and X. Xu, “SAE+LSTM: A new framework for emotion recognition from multi-channel eeg,” Frontiers in Neurorobotics, vol. 13, p. 37, 2019, issn: 1662-5218. doi: 10.3389/fnbot.2019.00037. [Online]. Available: https://www.frontiersin.org/article/10.3389/fnbot.2019.00037. b10 H. Chao, H. Zhi, D. Liang, and Y. Liu, “Recognition of emotions using multi-channel eeg data and dbn-gc-based ensemble deep learning framework,” Computational Intelligence and Neuroscience, vol. 2018, pp. 1–11, Dec. 2018. doi: 10.1155/2018/9750904. b11 Z. Mohammadi, J. Frounchi, and M. Amiri, “Wavelet-based emotion recognition system using eeg signal,” Neural Computing and Applications, vol. 28, Aug. 2017. doi: 10.1007/s00521-015-2149-8. b12 X. Li, J.-Z. Yan, and J.-H. Chen, “Channel division based multiple classifiers fusion for emotion recognition using eeg signals,” ITM Web of Conferences, vol. 11, p. 07006, Jan. 2017. doi: 10.1051/itmconf/20171107006. b13 A. Ang and Y. Yeong, “Emotion classification from eeg signals using time-frequency-dwt features and ann,” Journal of Computer and Communications, vol. 05, pp. 75–79, Jan. 2017. doi: 10.4236/jcc.2017.53009. b14 S. Alhagry, A. Aly, and R. El-Khoribi, “Emotion recognition based on eeg using lstm recurrent neural network,” International Journal of Advanced Computer Science and Applications, vol. 8, Oct. 2017. doi: 10.14569/IJACSA.2017.081046. b15 S. Koelstra, C. Muehl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, “DEAP: A database for emotion analysis using physiological signals,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012. b16 J. D. Morris, “Observations: SAM: The self-assessment manikin; an efficient cross-cultural measurement of emotional response,” Journal of Advertising Research, 1995. b17 D. Wang and Y. Shang, “Modeling physiological data with deep belief networks,” International Journal of Information and Education Technology (IJIET), vol. 3, pp. 505–511, Jan. 2013. doi: 10.7763/IJIET.2013.V3.326. b18 X. Li, P. Zhang, D. Song, G. Yu, Y. Hou, and B. Hu, “EEG based emotion identification using unsupervised deep feature learning,” 2015. b19 M. A. Asghar, M. J. Khan, Fawad, Y. Amin, M. Rizwan, M. Rahman, S. Badnava, S. S. Mirjavadi, and S. S. Mirjavadi, “EEG-based multi-modal emotion recognition using bag of deep features: An optimal feature selection approach,” Sensors (Basel, Switzerland), vol. 19, no. 23, Nov. 2019, issn: 1424-8220. doi: 10.3390/s19235218. [Online]. Available: https://europepmc.org/articles/PMC6928944. b21 W. Ng, A. Saidatul, Y. Chong, and Z. Ibrahim, “Psd based features extraction for eeg signal during typing task,” IOP Conference Series: Materials Science and Engineering, vol. 557, p. 012032, Jun. 2019. doi: 10.1088/1757-899X/557/1/012032. b22 M. Ghofrani Jahromi, H. Parsaei, A. Zamani, and D. W. Stashuk, “Cross comparison of motor unit potential features used in emg signal decomposition,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 5, pp. 1017–1025, 2018. doi: 10.1109/TNSRE.2018.2817498. b23 Q. Xiong, X. Zhang, W.-F. Wang, and Y. Gu, “A parallel algorithm framework for feature extraction of eeg signals on mpi,” Computational and Mathematical Methods in Medicine, vol. 2020, pp. 1–10, May 2020. doi: 10.
1155/2020/9812019. b24 N. Donges, “A guide to RNN: Understanding recurrent neural networks and LSTM networks,” [Online]. Available: https://builtin.com/data-science/recurrent-neural-networks-and-lstm (accessed: 24.09.2021).
http://arxiv.org/abs/2307.05955v1
20230712065701
Microscopic origin of quantum supersonic phenomenon in one dimension
[ "Zhe-Hao Zhang", "Yuzhu Jiang", "Hai-Qing Lin", "Xi-Wen Guan" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas" ]
Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China; University of Chinese Academy of Sciences, Beijing 100049, China. [][email protected] Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China; NSFC-SPTP Peng Huanwu Center for Fundamental Theory, Xi'an 710127, China. [][email protected] of Physics, Zhejiang University, Hangzhou 310058, China. [][email protected] Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China; NSFC-SPTP Peng Huanwu Center for Fundamental Theory, Xi'an 710127, China; Hefei National Laboratory, Hefei 230088, China; Department of Fundamental and Theoretical Physics, Research School of Physics, Australian National University, Canberra ACT 0200, Australia. Using the Bethe ansatz (BA) solution, we rigorously determine the non-equilibrium dynamics of the quantum flutter and revival of an impurity injected with a large initial momentum Q into a one-dimensional (1D) interacting bosonic medium. We show that two types of BA excited eigenstates dominate the oscillations of the quantum flutter, whose period is simply given by the charge and spin dressed energies ε_c,s(0) at zero quasi-momentum, τ_QF = 2π/(|ε_c(0)|-|ε_s(0)|). We also determine the quantum revival dynamics, with a period τ_L = L/(v_c(Q-k^*)-v_s(k^*)) larger than τ_QF, revealing the quantum reflection of excitations induced by the periodic boundary conditions of a finite length L. Here v_c,s are the sound velocities of charge and spin excitations, respectively, and k^* is determined by the rapidity of the impurity. Our results reveal a microscopic origin of the quantum supersonic phenomenon and shed light on quantum magnon metrology for measuring the gravitational force. Microscopic origin of quantum supersonic phenomenon in one dimension Xi-Wen Guan August 12, 2023 ===================================================================== Introduction. Quantum many-body systems with impurities exhibit rich collective and interference phenomena, ranging from polarons <cit.> to Bogoliubov-Cherenkov radiation <cit.>, shock waves <cit.>, Bloch oscillations <cit.>, the quantum flutter (QF) <cit.>, etc. When an impurity is injected into a fermionic (or bosonic) medium with a speed larger than the intrinsic sound velocity, the momentum of the impurity shows a long-time oscillation after a fast decay. This behaviour was named "quantum flutter" <cit.>, and is described by quasi-particle oscillations between polaron-like and exciton-like states in the medium of a free Fermi gas or a Tonks-Girardeau Bose gas <cit.>. On the other hand, quantum simulators of many-body phenomena in ultracold atomic systems attract a great deal of attention <cit.>. One-dimensional (1D) exactly solvable models of ultracold atoms lay out profound many-body physics <cit.>. In this scenario, the Bethe ansatz (BA) solutions of integrable Fermi and Bose gases <cit.> provide deep insights into the essence of the quasiparticles, such as the polaron for slowly moving impurities <cit.>, the fractionalized magnon <cit.> in spin excitations, and the supersonic dynamics <cit.> of a fast moving impurity. In particular, 1D multi-component Bose gases with a spin-independent interaction exhibit a striking feature of ferromagnetism <cit.>, which dramatically alters the low-energy physics of the Lieb-Liniger gas <cit.>.
In recent years, much effort has been devoted to experimentally manipulating quasiparticle magnons by coupling ferromagnetic systems to an optical cavity, an external gravitational force, etc. <cit.>. However, a rigorous understanding of the dynamics of magnons in different mediums beyond the mean field is still challenging and highly desirable. In this letter, we report exact results for the QF and revival of a supersonic impurity in a medium of bosonic liquid. Building on the BA and the form factors of the 1D two-component Bose gas <cit.>, we rigorously calculate the time evolutions of the impurity momentum, the momentum distribution and the correlation function, allowing us to determine the exact microscopic states of the QF and revival. The QF is caused mainly by coherent oscillations between two sets of BA eigenstates obtained by reshuffling the quantum numbers beyond the ground state, i.e. the magnon- and exciton-like states; this simply gives an explicit expression for the periodicity, τ_QF = 2π/(|ε_c(0)|-|ε_s(0)|). Here ε_c,s are respectively the dressed energies of charge and spin, which can be determined from the thermodynamic BA equations <cit.>. The evolution of the impurity momentum coincides with the motion of the wave-packet center, displaying a novel feature of quantum snaking behavior. Meanwhile, the finite-size energies of the magnon-like states elegantly determine a revival dynamics with a period τ_L = L/(v_c(Q-k^*)-v_s(k^*)) larger than τ_QF, which significantly reveals a quantum reflection of excitations. Here v_c,s are the sound velocities of charge and spin at the particular momenta Q-k^* and k^*, which can be obtained precisely from the BA equations. The model and exact solution. We consider the 1D two-component Bose gas described by the Hamiltonian H = ∫_0^L dx ( ħ^2/2m∑_σ∂Ψ̂_σ^†∂Ψ̂_σ + c∑_σσ'Ψ̂_σ^†Ψ̂_σ'^†Ψ̂_σ'Ψ̂_σ), for N bosons of the same mass m with two internal spin states σ=↑,↓, confined to a 1D system of length L and interacting via a δ-function potential, where Ψ̂_σ(x) is the field operator of the bosons with pseudo-spin σ. The interaction strength c=-2/a_1D is tunable via the effective 1D scattering length a_1D <cit.>. We will use the dimensionless interaction strength γ=cL/N in the following discussion. For convenience, we set 2m=ħ=1. The model (<ref>) was solved <cit.> by means of the nested BA <cit.> for an arbitrary number M of down-spins, see also <cit.>. Using species-selective atomic systems, related models were studied experimentally with regard to quantum transport <cit.>, dynamics of impurities <cit.> and Bloch oscillations <cit.>. The eigenfunctions of the model (<ref>) can be written as BA wave functions <cit.> determined by N wave numbers {k_i} with i=1,2,⋯,N and M spin rapidities {λ_α} with α=1,2,⋯,M satisfying the BA equations I_i = 1/2π k_i L - 1/2π∑_α=1^M θ(2k_i-2λ_α) + 1/2π∑_j=1^N θ(k_i-k_j), J_α = 1/2π∑_j=1^N θ(2λ_α-2k_j) - 1/2π∑_β=1^M θ(λ_α-λ_β), where θ(x)=2 atan(x/c). The quantum numbers are integers or half-integers, I_j ∈ Z+(N-M-1)/2, J_α ∈ {-(N-M-1)/2, -(N-M-1)/2+1, ⋯, (N-M-1)/2}. For a given set of quantum numbers {I_N, J_M}, Eqs. (<ref>) determine the highest weight and non-highest weight states |I_N, J_M,ℓ⟩ = (Ŝ^-)^ℓ |I_N, J_M,0⟩ with ℓ = 0 and ℓ = 1,⋯,N-2M, respectively, see the Supplemental Material (SM) <cit.>. The energy and momentum of the model are given by E=∑_i k_i^2 and K = 2π/L(∑_i I_i - ∑_α J_α), respectively. Although the BA equations (<ref>) have been known for a long time, analytical results for the ferromagnetism, impurity dynamics and transport of the model remain formidable.
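As an illustration of how the logarithmic BA equations above can be handled numerically (a sketch only, not the authors' code), the snippet below solves them for a small system with one down-spin using a standard root finder; N, M, L, c and the quantum numbers are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import fsolve

N, M, L, c = 8, 1, 10.0, 2.0                    # illustrative parameters
theta = lambda x: 2.0 * np.arctan(x / c)

# Quantum numbers I_j and J_alpha (integers here, since N - M - 1 is even)
I = np.arange(N) - N // 2
J = np.zeros(M)

def bae(u):
    k, lam = u[:N], u[N:]
    # charge equations: k_i L - sum_a theta(2k_i - 2lam_a) + sum_j theta(k_i - k_j) = 2 pi I_i
    eq_k = (k * L
            - np.array([np.sum(theta(2 * ki - 2 * lam)) for ki in k])
            + np.array([np.sum(theta(ki - k)) for ki in k])
            - 2 * np.pi * I)
    # spin equations: sum_j theta(2lam_a - 2k_j) - sum_b theta(lam_a - lam_b) = 2 pi J_a
    eq_l = (np.array([np.sum(theta(2 * la - 2 * k)) for la in lam])
            - np.array([np.sum(theta(la - lam)) for la in lam])
            - 2 * np.pi * J)
    return np.concatenate([eq_k, eq_l])

guess = np.concatenate([2 * np.pi * I / L, np.zeros(M)])
k, lam = np.split(fsolve(bae, guess), [N])
E = np.sum(k ** 2)                               # energy
K = 2 * np.pi / L * (np.sum(I) - np.sum(J))      # total momentum
```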
Initial state, density matrix and form factor. We consider the ground state of N-1 spin-up δ-interacting bosons |⟩ as the medium, and one spin-down atom with a wave function ϕ_↓(x) as the injected impurity. This gives the initial state |Φ_I⟩ = ∫ dx ϕ_↓(x) Ψ̂_↓^†(x) |⟩. The time evolution of the density matrix of the spin-down boson is then given by ρ_↓(x,x',t) = ⟨Φ_I|Ψ̂_↓^†(x,t) Ψ̂_↓(x',t)|Φ_I⟩/⟨Φ_I|Φ_I⟩ = ∑_α,α' e^i(E_α-E_α')t A^*_α A_α' ρ_↓^αα'(x,x'), where Ψ̂_σ^†(x,t)= e^iĤt Ψ̂_σ^†(x) e^-iĤt, ρ_↓^αα'(x,x')= ⟨α|Ψ̂_↓^†(x) Ψ̂_↓(x')|α'⟩/√(⟨α|α⟩⟨α'|α'⟩) is the matrix element of the density operator, and A_α = ⟨α|Φ_I⟩/√(⟨α|α⟩⟨Φ_I|Φ_I⟩) is the overlap between the initial state and the eigenstate. Here |α⟩ is either a highest weight state, denoted |I_N,J,0⟩, or the non-highest weight state |I_N, J_0,1⟩, E_α is the energy of the state |α⟩, and J_0 denotes an empty set. The latter type of state has been largely ignored in the literature; here we note its nontrivial contribution to the impurity dynamics, see the SM <cit.>. Using the determinant representation <cit.>, we precisely calculate the overlap integral A_α and the density matrix ρ_↓^αα'(x,x'). Without loss of generality, we take the impurity wave packet to be a plane wave with momentum Q, i.e. ϕ_↓(x)= e^iQx, so that the corresponding initial state has a fixed momentum K=Q. The momentum is conserved in the states |α⟩ and |α'⟩, which naturally gives a selection rule for the overlap A and the matrix elements; for example, the sum rule of the overlap integral A is ∑_α|A_α|^2=1. To this end, we also need to express the overlap integral A, the density matrix elements and the norms of the states in terms of form factors <cit.>. With the help of the form factors and the sum rule of A, we may select enough essential states that the sum rule is very close to 1 (above 95%, see the SM <cit.> for details). Quantum flutter and impurity dynamics. The time evolution of the impurity momentum is given by K_↓(t) = ∑_k_n k_n P_↓(k_n,t), where k_n=2nπ/L, n=0,±1,⋯, are the Fourier components. Here the probability of the impurity in momentum space reads P_↓(k_n,t) = ∫ ρ_↓(x,0,t) e^ik_n x dx. Using the BA solution of Eq. (<ref>) and its form factors, see <cit.>, we rigorously calculate the time evolution of the distribution P_↓(k_n,t) in FIG. <ref> (a), showing a QF wave-like oscillation near k_n=0 and a revival at the original momentum k_n=Q. FIG. <ref> (b) shows that the impurity momentum K_↓(t) oscillates soon after a quick decay; see also the later discussion of FIG. <ref> (d) and FIG. <ref> (c). The QF dynamics comes mainly from the coherent transition between the magnon excitations and the particle-hole collective excitations that result from the impurity scattering off the atoms of the interacting medium; we simply call these magnon- and exciton-like states, respectively. Using the BA equations (<ref>) and the form-factor expressions of the overlap integrals, we first rigorously determine the microscopic states underlying this evolution dynamics of the QF in the Bose gas (<ref>). From Eq. (<ref>) we observe that the oscillation behavior is mainly governed by multiple pairs of magnon and exciton states {|α⟩,|α'⟩}, leading to an oscillation period 2π/|E_α-E_α'|. This naturally suggests a mechanism for the supersonic behaviour, i.e. a coherent transition between the states {|α⟩,|α'⟩}. Guided by the sum rule weights, in FIG. <ref> (a) we selected 56 pairs of states {|α⟩,|α'⟩} with high sum rule weights, which contribute essentially to the dynamics of P_↓(k_n,t) and K_↓(t).
In FIG. <ref> (a) we used the same settings as in FIG. <ref>, while in FIG. <ref> (b) we present a schematic illustration of the magnon- and exciton-like states, see the left states |α⟩ and the right states |α'⟩. In FIG. <ref> (c) we give the weights of these excitation pairs, |A_α|^2+|A_α'|^2, showing the contribution of each magnon-exciton pair (MEP) to the dynamical evolution of the impurity. In FIG. <ref> (d) we observe that the K_↓(t) obtained from the selected MEPs (red dotted line) coincides with the numerical result (black solid line) from the BA wave function. This remarkably manifests that the microscopic MEPs give rise to coherent transitions between |α⟩ and |α'⟩ states with the same energy difference, see <cit.>. We further observe that the energy difference between the two states in the MEPs is given by ΔE_QF = |ε_c(0)|-|ε_s(0)|, where the charge dressed energy satisfies ε_c(0)<0, and ε_c(k)= k^2 - μ - ∫_-k_0^k_0 a_2(k-k') ε_c(k') dk', ε_s(λ)= - ∫_-k_0^k_0 a_1(k-λ) ε_c(k) dk, with a_n(x)= (1/2π) nc/[(nc/2)^2+x^2]. Here μ is the chemical potential, k_0 is the Fermi point (cut-off) of the charge quasimomentum k, ε_c(k_0)=0 and ε_s(±∞)=0. This elegantly gives the periodicity of the QF, τ_QF = 2π/(|ε_c(0)|-|ε_s(0)|). In FIG. <ref> (a) we show that the oscillation period depends essentially on the interaction strength. Our analytical result (blue solid line), Eq. (<ref>), agrees well with the numerical result (circles). We observe that the period of the QF decreases with increasing interaction γ. For strong coupling we have τ_QF = 2π t_F(1+20/3γ) (long-dashed line), where t_F=1/E_F with the Fermi energy E_F=k_F^2. In the Tonks limit, i.e. γ→∞, ΔE_QF = E_F and thus τ_QF = 2π t_F (blue dotted line). Based on variational wave function theory, a mechanism of the QF was also studied in terms of the spectra of the plasmon and the magnon <cit.>. Moreover, we note that the period of the QF does not depend on the injected momentum Q. We calculated the QF dynamics for several values of the injected momentum and found that the QF always appears for Q>k_F and that the oscillation amplitude rises slightly with increasing Q. For different Q, the oscillation amplitudes of the QF decay to the same saturated value, see FIG. <ref> (b). Furthermore, using a Gaussian impurity wave packet <cit.>, we observe that the motion of the center of mass of the impurity, X_↓(t)= ⟨Φ_I|x̂(t)|Φ_I⟩/⟨Φ_I|Φ_I⟩, coincides with the evolution of the impurity momentum, namely (1/2)∂_t X_↓(t) = K_↓(t), showing the particle nature of the QF. Quantum revival. FIG. <ref> (a) also shows a larger periodic revival behavior appearing along the line k_n=Q. This striking feature of quantum revival is also observed in the evolution of the momentum K_↓(t) in FIG. <ref> (b). The cause of this revival behavior is essentially the quantum reflection of excitations induced by the periodic boundary conditions of the finite length L. Using the BA equations (<ref>), we determine a set of pairs of magnon-like states with the same minimum momentum difference that have large sum rule weights for the revival dynamics of the impurity, see FIG. <ref> (a). When an impurity is injected into the medium of the bosonic liquid, momentum is exchanged back and forth between the magnon impurity and the medium. Consequently, the transitions between two magnon-like states with the same lowest momentum difference strikingly determine the revival dynamics of the impurity.
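For illustration, the dressed-energy equations above can be iterated numerically on a quasi-momentum grid to evaluate τ_QF. The sketch below is our own rough implementation with arbitrary values of c and k_0, not the authors' code, and it does not compute the sound velocities needed for the revival period τ_L.

```python
import numpy as np

c, k0 = 2.0, 1.0                              # illustrative coupling and cut-off
k = np.linspace(-k0, k0, 401)

def a(n, x):
    # kernel a_n(x) = (1/2pi) * n c / ((n c / 2)^2 + x^2)
    return n * c / (2.0 * np.pi * ((n * c / 2.0) ** 2 + x ** 2))

# Iterate eps_c(k) = k^2 - mu - int a_2(k - k') eps_c(k') dk',
# fixing mu at each step so that eps_c(+-k0) = 0.
eps_c = k ** 2 - k0 ** 2
for _ in range(500):
    conv = np.array([np.trapz(a(2, ki - k) * eps_c, k) for ki in k])
    new = (k ** 2 - conv) - (k0 ** 2 - conv[-1])   # mu = f(k0) enforces eps_c(k0) = 0
    if np.max(np.abs(new - eps_c)) < 1e-10:
        eps_c = new
        break
    eps_c = new

eps_s0 = -np.trapz(a(1, k) * eps_c, k)        # eps_s(lambda = 0)
eps_c0 = eps_c[len(k) // 2]                   # eps_c(k = 0) < 0
tau_QF = 2.0 * np.pi / (abs(eps_c0) - abs(eps_s0))
print(tau_QF)
```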
From the form factors, the sum rule and the BA solution of the model, we further determine that the pairs of magnon states with the minimum momentum difference Δp = 2π/L lead to an energy difference ΔE_L = [v_c(Q-k^*)-v_s(k^*)] Δp, where v_c,s(p) are the sound velocities of the charge and magnon excitations, respectively. Here the sound velocities are given by v_c,s(p) = ∂E_c,s(p)/∂p, where E_c,s(p) are the single-particle dispersions of charge and spin, respectively. Consequently, the period of the quantum revival is given by τ_L = L/[v_c(Q-k^*)-v_s(k^*)], where, for Q>k_F, k^* can be determined numerically from the magnon-like state with the largest weight |A_α|^2, namely k^* = (1-2J/N)k_F with 0<k^*<k_F. The momentum transfer from the medium to the impurity results in the periodic nature of the revival (<ref>), which is also observed in the single-particle propagator G_↓(x,t) = ⟨Ψ̂_↓(x,t) Ψ̂^†_↓(0,0)⟩, see FIG. <ref> (b). The agreement among these periodicities firmly confirms the nature of the revival, i.e. the dynamical oscillation between the magnon pairs due to the finite-size effect. In summary, using the BA and form factors, we have rigorously determined the microscopic states responsible for the QF and revival of a supersonic impurity in a 1D medium of interacting bosons. We have obtained an explicit expression for the period of the QF (<ref>), revealing deep insights into the coherent nature of the magnon- and exciton-like states in the course of impurity scattering with the medium. Moreover, we have also derived the longer period of the quantum revival (<ref>), manifesting the quantum reflection of excitations in terms of the oscillations between the two magnon-like states with the largest weight |A_α|^2 in Eq. (<ref>). Building on the current experimental capability of realizing 1D impurity problems <cit.>, a measurement of the supersonic behaviour of the model (<ref>) can readily be implemented in highly elongated 1D systems of ultracold atoms. Moreover, our results extend the understanding of quantum supersonic impurities in a 1D interacting Luttinger-liquid medium and will further stimulate theoretical and experimental efforts to study various quantum impurity problems, with promising applications in quantum technologies for measuring the gravitational force. § ACKNOWLEDGEMENT X.W.G. and Y.Z.J. are supported by the NSFC key grant No. 12134015, the NSFC grants No. 12121004 and No. 12175290, and the National Key R&D Program of China under grant No. 2022YFA1404102. They are also partially supported by the Innovation Program for Quantum Science and Technology 2021ZD0302000, the Peng Huanwu Center for Fundamental Theory, No. 12247103, and the Natural Science Foundation of Hubei Province 2021CFA027. HQL acknowledges NSFC grant No. 12088101 and computational resources from the Beijing Computational Science Research Center. 39 Schirotzek:2009 A. Schirotzek, C.-H. Wu, A. Sommer, and M. W. Zwierlein, Phys. Rev. Lett. 102, 230402 (2009). Nascimbene2009 S. Nascimbène, N. Navon, K. J. Jiang, L. Tarruell, M. Teichmann, J. McKeever, F. Chevy and C. Salomon, Phys. Rev. Lett. 103, 170402 (2009). Combescot2009 R. Combescot and S. Giraud, Phys. Rev. Lett. 101, 050404 (2008). Bruum2010 G. M. Bruun and P. Massignan, Phys. Rev. Lett. 101, 050404 (2010). ZZYan2020S Z. Z. Yan, Y. Ni, C. Robens, and M. W. Zwierlein, Science 368, 190 (2020). Mistakidis2019PRL S. I. Mistakidis, G. C. Katsimiga, G. M. Koutentakis, T. Busch, and P. Schmelcher, Phys. Rev. Lett. 122, 183001 (2019). Henson:2018 B. M. Henson, X. G. Yue, S. S.
Hodgman, D. K. Shin, L. A. Smirnov, E. A. Ostrovskaya, X. W. Guan, A. G. Truscott, Phys. Rev. A 97, 063601 (2018). Doyon2017PRLB. Doyon, J. Dubail, R. Konik, and T. Yoshimura, Phys. Rev. Lett. 119, 195301 (2017). SASimmons2020PRLS. A. Simmons, F. A. Bayocboc, J. C. Pillay, D. Colas, I. P. McCulloch, and K. V. Kheruntsyan, Phys. Rev. Lett. 125, 180401 (2020). JianLi2021PRLJ. Li, S. Chockalingam, and T. Cohen, Phys. Rev. Lett. 127, 014302 (2021). Meinert:2017F. Meinert, et. al. Science 356, 945 (2017) EDemler2012NPC. J. M. Mathy, M. B. Zvonarev, and E. Demler, Nat. Phys. 8, 881 (2012). EDemler2014PRLM. Knap, C. J. M. Mathy, M. Ganahl, M. B. Zvonarev, and E. Demler, Phys. Rev. Lett. 112, 015302 (2014). BvandenBerg2016PRL2R. van den Berg, B. Wouters, S. Eliëns, J. De Nardis, R. M. Konik, and J.-S. Caux, Phys. Rev. Lett. 116, 225302 (2016). WSBakr2009NW. S. Bakr, J. I. Gillen, A. Peng, S.Foelling, and M. Greiner, Nature 462, 74 (2009). RMPreiss2015SP. M. Preiss, R. Ma, M. E. Tai, A. Lukin, M. Rispoli, P. Zupancic, Y. Lahini, R. Islam, and M. Greiner, Science 347, 1229 (2015). IBloch2017SC. Gross and I. Bloch, Science 357, 995 (2017). CCChien2015NPC. C. Chien, S. Peotta, and M. Di Ventra, Nat. Phys. 11, 998 (2015). CWeitenberg2011NC. Weitenberg, M. Endres, J. F. Sherson, M. Cheneau, P. Schauß, T. Fukuhara, I. Bloch, S. Kuhr, nature 471, 319 (2011). TaoShi2018PRLY. Ashida, T. Shi, M. C. Bañuls, J. I. Cirac, and E. Demler, Phys. Rev. Lett. 121, 026805 (2018). JPRonzheimer2013PRLJ. P. Ronzheimer, M. Schreiber, S. Braun, S. S. Hodgman, S. Langer, I. P. McCulloch, F. Heidrich-Meisner, I. Bloch, and U. Schneider, Phys. Rev. Lett. 110 (2013). TFukuhara2013NPT. Fukuhara, A. Kantian, M. Endres, M. Cheneau, P. Schauss, S. Hild, D. Bellem, U. Schollwoeck, T. Giamarchi, C. Gross, et al., Nat. Phys. 9, 235 (2013). IBloch2013NT. Fukuhara, P. Schauss, M. Endres, S. Hild, M. Cheneau, I. Bloch, and C. Gross, nature 502, 76 (2013). FSchmidt2018PRLF. Schmidt, D. Mayer, Q. Bouton, D. Adam, T. Lausch, N. Spethmann, and A. Widera, Phys. Rev. Lett. 121, 130403 (2018). MRYang2022CMM. Yang, M. Cufar, E. Pahl, and J. Brand, Condens. Matter 7, 15 (2022). RSChristensen2015PRLR. S. Christensen, J. Levinsen, and G. M. Bruun, Phys. Rev. Lett. 115 (2015). AVashisht2022SPA. Vashisht, M. Richard, and A. Minguzzi, SciPost Phys. 12, 008 (2022). XWGuan2016PRAR. Mao, X. W. Guan, and B. Wu, Phys. Rev. A 94, 043645 (2016). FMassel2013NJPF. Massel, A. Kantian, A. J. Daley, T. Giamarchi, and P. Torma, New J. Phys. 15, 045018 (2013). SPeotta2013PRLS. Peotta, D. Rossini, M. Polini, F. Minardi, and R. Fazio, Phys. Rev. Lett. 110, 015302 (2013). NJRobinson2020JSMN. J. Robinson, J.-S. Caux, and R. M. Konik, J. Stat. Mech. 2020, 013103 (2020). JCaux2016PRLN. J.Robinson, J.-S. Caux, and R. M. Konik, Phys. Rev. Lett. 116, 145302 (2016). HFroeml2019PRLH. Fröml, A. Chiocchetta, C. Kollath, and S. Diehl, Phys. Rev. Lett. 122, 040402 (2019). MBZvonarev2007PRLM. B. Zvonarev, V. V. Cheianov, and T. Giamarchi, Phys. Rev. Lett. 99, 240404 (2007). BPozsgay2012JPAB. Pozsgay, W.-V. v. G. Oei, and M. Kormos, J. Phys. A: Math. Theor 45, 465007 (2012). NJRobinson2017JSMN. J. Robinson and R. M. Konik, J. Stat. Mech., 063101 (2017). Cazalilla:2011M. A. Cazalilla, R. Citro, T. Giamarchi, E. Orignac and M. Rigol, Rev. Mod. Phys. 83 1405 (2011). Guan:2013X. W. Guan, M. T. Batchelor and C. Lee, Rev. Mod. Phys. 85, 1633 (2013). Guan:2022X. W. Guan, P. He, Rep. Prog. Phys. 85 11400 (2022). Guan2015CPBY. Jiang, Y.-Y. Chen, and X.-W. Guan, Chinese Phys. 
B 24, 050311 (2015). TaoShi2021PRXP. E. Dolgirev, Y.-F. Qu, M. B. Zvonarev, T. Shi, and E. Demler, Phys. Rev. X 11, 041015 (2021). Andrei:1983N. Andrei, K. Furuya, and J. H. Lowenstein, Rev. Mod. Phys. 55, 331 (1983). Lieb-LinigerE. H. Lieb and W. Liniger, Phys. Rev. 130, 1605 (1963). Yang:1967C. N. Yang, Phys. Rev. Lett. 19, 1312 (1967). Gaudin:1967M. Gaudin, Phys. Lett. A 24, 55 (1967). Vijayan:2020J. Vijayan, et. al. Nature 565, 56 (2020). Senaratne:2022R. Senaratne, et. al. Science 376, 1305 (2022). McGuire:1965J. B. McGuire, J. Math. Phys. 6, 432 (1965); J. Math. Phys. 7, 123 (1966). Guan-FPX.-W. Guan, Front. Phys, 7, 8 (2012). TLSchmidt2019PRLT. L. Schmidt, G. Dolcetto, C. J. Pedder, K. Le Hur, and P. P. Orth, Phys. Rev. Lett. 123, 075302 (2019). ASDehkharghani2018PRLA. S. Dehkharghani, A. G. Volosniev, and N. T. Zinner, Phys. Rev. Lett. 121, 080405 (2018). SIMistakidis2019NJPS. I. Mistakidis, F. Grusdt, G. M. Koutentakis, and P. Schmelcher, New J. Phys. 21, 103026 (2019). JCaux2009PRAJ.-S. Caux, A. Klauser, and J. van den Brink, Phys. Rev. A 80 (2009). JNFuchs2005PRLJ. N. Fuchs, D. M. Gangardt, T. Keilmann, and G. V. Shlyapnikov, Phys. Rev. Lett. 95, 150402 (2005). Fuchs:2005J. N. Fuchs, D. M. Gangardt, T. Keilmann, and G. V. Shlyapnikov, Phys. Rev. Lett. 95, 150402 (2005). Zvonarev:2005M. B. Zvonarev, V. V. Cheianov, and T. Giamarchi, Phys. Rev. Lett. 99, 240404 (2007). Batchelor:2006M. T. Batchelor, M. Bortz, X.-W. Guan and N. Oelkers, J. Stat. Mech., P03016 (2006). Barfknecht:2018 R. E. Barfknecht, A. Foerster, and N. T. Zinner, Few-Body Syst 59:22 (2018). Patu:2018 O. I. Patu, A. Klümper, and A. Foerster, Phys. Rev. Lett. 120, 243402 (2018). Eisenberg:2002E. Eisenberg, and E. H. Lieb, Phys. Rev. Lett. 89, 220403 (2002) Guan-Batchelor-TakahashiX.-W. Guan, M. T. Batchelor and M. Takahashi, Phys. Rev. A 76, 043617 (2007). Olshanii_PRL_1998M. Olshanii, Phys. Rev. Lett. 81, 938 (1998). Li-YQ:2003 Y.-Q. Li, S.-J. Gu, Z.-J. Ying, U. Eckern, EuroPhys. Lett. 61, 368 (2003). CNYang1967PRLC. N. Yang, Phys. Rev. Lett. 19, 1312 (1967). Sutherland1968PRLB.Sutherland, Phys. Rev. Lett. 20, 98 (1968). Palzer:2009S. Palzer, C. Zipkes, C. Sias and M. Köhl, Phys. Rev. Lett. 103, 150601 (2009). Catani:2012J. Catani, G. Lamporesi, D. Nailk, M. Gring, M. Inguscio, F. Minardi, A. Kantian and T. Giamarchi, Phys. Rev. A 85, 023623 (2012). SM In this supplemental material, we introduce the form factor formula of Bethe ansatz states and present in detail the calculation of the time evolution of the impurity dynamics as well as analytical analysis on the oscillation features of quantum flutter and revival. Caux:2006J.-S. Caux, and P. Calabrese, Phys. Rev. A 74, 031605(R) (2006) Caux:2007J.-S. Caux, P. Calabrese, and N. A. Slavnov, J. Stat. Mech., P01008 (2007). Caux:2009J.-S. Caux, J. Math. Phys. 50, 095214 (2009). Song:2022-1S. Cheng, Y.-Y. Chen, X.-W. Guan, W.-L. Yang, R. Mondaini, and H.-Q. Lin, arXiv:2209.15221v1. Li:2023R.-T. Li, S. Cheng, Y.-Y. Chen, X.-W. Guan, arXiv:2303.09208. Microscopic origin of quantum supersonic phenomenon in one dimension   — Supplementary materials   Zhe-Hao Zhang, Yuzhu Jiang, Hai-Qing Lin, and Xi-Wen Guan   § S1. THE ONE-DIMENSIONAL TWO-COMPONENT BOSE GAS The model Eq. (1) in the main text describes the one-dimensional (1D) two-component Bose gases with a delta-function interaction. As a solvable many-body problem, its Hamiltonian reads Ĥ =- ∑_i=1 ^N∂^2/∂ x_i^2+2c∑_i<jδ(x_i-x_j), whereNis the total particle number,cis the interaction strength andLis length of the system. 
Here we take the periodic boundary conditions and the total momentumK̂is conserved. The eigenstate ofNparticles withMspin-down bosons of the model Eq. (1) in the main text is given by |⟩ = ∫ dx(x) Ψ̂_↓ ^†(x_1) Ψ̂_↓ ^†(x_2) ⋯Ψ̂_↓ ^†(x_M) ×Ψ̂_↑ ^†(x_M+1) Ψ̂_↑ ^†(x_M+2)⋯Ψ̂_↑ ^†(x_N) |0⟩, whereΨ̂_↑,↓^†(x)are the field operators of spin-up and spin-down bosons, respectively,(x)denotes the Bethe ansatz (BA) wave function of the first quantized Hamiltonian (<ref>). Here we denotedx={x_1,x_2,⋯,x_N},∫ dx=∫_0^L dx_1 ∫_0^L dx_2 ⋯∫_0^L dx_Nand|0⟩stands for the vacuum state. This model was exactly solved by the BA <cit.> and the Bethe ansatz equations (BAE) are given by e^ i k_jL = - ∏_j'=1^N k_j-k_j'+ i c/k_j-k_j'- i c∏_α = 1^M k_j-λ_α- ic /2/k_j-λ_α+ ic/2, ∏_j=1^N λ_α-k_j- ic/2/λ_α-k_j+ ic/2 =-∏_β=1^M λ_α-λ_β- ic/λ_α-λ_β+ ic, wherek_jis the wave number,λ_αis the spin rapidity,j=1,2⋯,Nandα=1,2⋯,M. Eqs. (2) in the main text were obtained from the logarithm form of the BAE (<ref>). Both the energy and momentum are conserved and they are given by E =∑_j=1^N k^2_j,    K =∑_j=1^N k_j, respectively. Moreover, the total momentum can be calculated by the quantum numbers of the logarithm form of BAE, see Eq. (3) in the main text. § S2. QUANTUM DYNAMICS OF THE SUPERSONIC IMPURITY We first discuss the evolution of impurity momentum injected into a bosonic quantum medium. The medium is the ground state of ofN-1spin-up bosons|⟩, and the impurity is a spin-down particle with a wave functionϕ_↓(x). We define the initial state of the supersonic impurity |_ I⟩ = ∫_0^L d x ϕ_↓(x) Ψ̂_↓^†(x) |⟩. The time evolution of the impurity momentum is defined by K_↓(t) = ∑_k_n k_n P_↓(k_n,t), whereP_↓(k_n,t)is the momentum distribution P_↓(k_n,t) = 1/L∫_0^L dx ∫_0^L dx' e^- ik_n (x-x') ×⟨_ I|Ψ̂_↓^†(x,t) Ψ̂_↓^(x',t) |_ I⟩/⟨_ I⟩,Ψ̂_↓^†(x,t) = e^ i(Ĥt - K̂x)Ψ̂_↓^†(0) e^- i(Ĥt - K̂x)andk_n = 2nπ/L,n=0,±1,⋯. Insert three complete sets of eigenstates intoP_↓(k_n,t), we get P_↓ (k_n,t) = k_n L ∑_αα'β e^ i(E_α - E_α')tδ_k_n,K_α-K_βδ_K_α,K_α' ×_ Iα⟨α|Ψ̂_↓^†(0)|β⟩⟨β|Ψ̂_↓^(0) |α'⟩α'_ I/⟨_ I|⟨%s|%s⟩⟩α⟨β|⟨%s|%s⟩⟩α', where|α⟩,|α'⟩and|β⟩are eigenstates of the Hamiltonian and momentum, namely,H|α⟩=E_α|α⟩andK̂|α⟩=K_α|α⟩. Thus the time evolution of impurity momentum can be written as K_↓(t) = L ∑_αα' e^ i(E_α - E_α')t K_αα', K_αα' = ∑_β (K_α-K_β) A^*_α B^*_αβB_α'βA_α'δ_K_α,K_α', whereAis the overlap between the initial state and the eigenstate,A_α =⟨α| _ I|/⟩√(⟨α|⟨%s|%s⟩⟩_ I)and overlapB_αβ = ⟨β|Ψ̂^_↓(0) |α⟩/√(⟨α|⟨%s|%s⟩⟩β). The sum rule ofA_αandB_αβare ∑_α |A_α|^2=1,     L∑_β |B_αβ|^2=1, respectively, and|A_α|^2(B_αβ^2) is the weight of eigenstate|α⟩in the overlap (density matrix element). Using the eigenstates of Hamiltonian (<ref>), we can calculate the eigenvalues of the Hamiltonian, the overlapA_αand the matrix elementsB_αβin terms of determinant representation of the norms and form factors. Consequently, we may obtain the evolutions of the momentum and momentum distributions. In particular, guided by the sum rules, we can select the microscopic states with large sum rule weights that essentially comprise the oscillation features of the QF and revival dynamics. We give in details the calculations of the above mentioned quantities in next sections. § S3. METHOD FOR CALCULATING TIME EVOLUTION OF IMPURITY MOMENTUM §.§ S3.1 Selection of the eigenstates for quantum flutter In the BA equations [Eqs. 
(<ref>) in the main text],IandJdenote the quantum numbers of the charge and spin degrees of freedom, respectively, whereI=I_N={I_1,I_2,⋯,I_N},J=J_M={J_1,J_2,⋯,J_M}andMis the number of spin-down particles. For a given set of quantum numbers{I_N, J_M}, the BA equations uniquely determine the wave numbers and spin rapidities{k_1,k_2,⋯,k_N; λ_1, λ_2, ⋯, λ_M}. Consequently, the BA solutions giveN-2M+1eigenstates,|I_N, J_M,ℓ⟩=(Ŝ^-)^ℓ|I_N, J_M,0⟩, whereℓ=0,1,2,⋯,N-2M. Here we denote|I_N, J_M,0⟩as a highest weight state, i.e.,Ŝ^+|I_N, J_M,0⟩=0. The states with non-zero values of theℓare non-highest weight states. In the above, we defined the spin operatorsŜ^-=∫ dx Ψ̂^†_↓(x) Ψ̂_↑(x)andŜ^+=∫ dx Ψ̂^†_↑(x) Ψ̂_↓(x). The total spinSand its projection inz-directionS^ zare good quantum numbers of the state|I_N, J_M,ℓ⟩, namely, S=N/2-M,   S^ z=N/2-M-ℓ. There are three sets of complete eigenstates{|α⟩},{|α'⟩}and{|β⟩}which were inserted in the calculation of impurity momentumK_↓(t)Eq. (<ref>). The eigenstates include all of the highest and non-highest weight ones. Guided by the sum rules, we need to select enough states to calculate the dynamical evolutions of the momentumK_↓(t)and momentum distributions. Without losing accuracy, the following selection rules were used to essentially simplify our numerical task: (i) The total particle number is a good quantum number of the initial state |_ I⟩, such that {|α⟩} and {|α'⟩} consist of the states |I_N,J_M,ℓ⟩ with total particle number N. However, the state {|β⟩} in Eq. (<ref>) must be the state |I_N-1,J_M',ℓ⟩ with the total particle number N-1, respectively. (ii) The total spin is not a good quantum number of the initial state |_ I⟩, while S^ z is a good quantum number, Ŝ^ z|_ I⟩=(N/2-1)|_ I⟩. Together with the selection rule (i), the possible states of {|α⟩} and {|α'⟩} are |I_N,J=J_1,0⟩ and |I_N,J_0,1⟩. Whereas the state {|β⟩} relates to the state |I_N-1,J_0,0⟩, where J_0 is an empty set. (iii) When the impurity wave function ϕ_↓(x) is a plane wave with a fixed momentum Q, the total momentum is also a good quantum number of |_ I⟩, K̂|_ I⟩ = Q |_ I⟩, so that I,J_0,1_ I =I,J,0_ I=0 when the quantum numbers do not satisfy K=Q according to Eq. (<ref>) in the main text. We only need to calculate the states with K_α=K_α'=Q in our study. Based on these selection rules, we need to obtain the states|I_N,J,0⟩,|I_N,J_0,1⟩and|I_N-1,J_0,0⟩(J_0is defined in selection rule (ii)). We will give these states in the following study. For the states with allNparticles spin-up, we give a set of quantum numbersI_N, get a set of wave numbers{k_j}from the BA equations (<ref>) and find the wave function of this eigenstate to be |I,J_0,0⟩ = ∫ dx_0(x) Ψ̂^†_↑(x_1) …Ψ̂^†_↑(x_N) | 0 ⟩, _0(x) = 1/√(N!)∑_ P^ (-1)^ P e^ i∑_jx_j k_ P_j ×∏_i<j [k_ P_i-k_ P_j+ ic sign(x_j-x_i)], where Pare the permutations of{1,2,⋯,N}. The total spin of this state isS=S^ z=N/2. In fact,|I,J_0,0⟩is the eigenstate of the Lieb-Liniger model. There are two kinds of eigenstates with one spin-down particle, the highest weight states|I,J,0⟩and the non-highest weight states|I,J_0,1⟩. For the highest weight state, a given set of quantum numbers{I_N,J}determines a unique solution of the BA equations (<ref>), namely, the wave numbers and spin rapidity{k_1,k_2,⋯,k_N; λ}. Then we can have explicit forms of different wave functions. 
The highest weight state is given by |I,J,0⟩ = ∫ dx_1(x) Ψ̂^†_↓(x_1) …Ψ̂^†_↑(x_N) | 0 ⟩, _1(x) = ∑_l=1^N1/√(N!)[ ∑_ P^ (-1)^ P e^ i∑_jx_jk_ P_j ×∏_i<j^N [k_ P_i-k_ P_j+ ic  sign(x_j-x_i)] ×∏_j ≠ l^[λ-k_ P_j+ ic/2 sign(x_l-x_j)] ]. The total spin of this state isS=S^ z=N/2-1. Using the relation Eq. (<ref>),|I,J_0,1⟩=Ŝ^-|I,J_0,0⟩, the non-highest weight states is given by |I,J_0,1⟩ = ∑_l=1^N ∫ dx_0(x) ×Ψ̂^†_↑(x_1) …Ψ̂^†_↓(x_l) …Ψ̂^†_↑(x_N) | 0 ⟩. The total spin of this stateS = N/2andS^ z = N/2-1. §.§ S3.2 Matrix element Based on the discussions above, we need to calculateA_αandB_α,βfor the time evolution of impurity momentumK_↓(t). Using the specific forms of the wave functions of the relevant states Eqs. (<ref>-<ref>), and following the method <cit.>. we can directly calculateA_αandB_α,β. Explicitly, we have A_α = ⟨α|_ I|⟩/√(⟨α|⟨%s|%s⟩⟩_ I) =∫ dx ⟨α|ϕ_↓(x) Ψ^†_↓(x) |⟩/√(⟨α|⟨%s|%s⟩⟩_ I) =∫ e^- i K_α x dx ϕ_↓(x) ⟨α|Ψ^†_↓(0) |⟩/√(⟨α|⟨%s|%s⟩⟩_ I), ⟨_ I| ⟩ = ∫ d y ϕ_↓^*(y) ⟨|Ψ̂_↓^(y) ∫ d x ϕ_↓(x) Ψ̂_↓^†(x) |⟩ = ∫ d x |ϕ_↓(x)|^2 ⟨|.⟩ We further calculate norms and overlaps, where|⟩ = |I_N-1,J_0,0⟩,|α⟩ = |I_N,J,0⟩or|α⟩ = Ŝ^-|I_N,J_0,0⟩=|I_N,J_0,1⟩. The norm of the state|I,J_0,0⟩is given by ⟨I_N,J_0,0|=⟩∏_i<j [(k_i-k_j)^2+c^2] det(𝒢), 𝒢_ij = δ_i,j[L+∑_l=1^Nϕ_1(k_i-k_l)]-ϕ_1(k_i-k_j), ϕ_n(u) = 2cn/n^2u^2+c^2, where{k_1,k_2,⋯,k_N}are the solution of BA equation (<ref>) with the quantum numbersI_N. The norm of non-highest weight state|I,J_0,1⟩can also be calculated by using Eq. (<ref>), namely, ⟨I_N,J_0,1|=⟩⟨I_N,J_0,0|Ŝ^+Ŝ^-|I_N,J_0,0⟩ =N⟨I_N,J_0,0|.⟩ Then the norm of the state|I,J,0⟩is given by the following equation ⟨I,J,0| ⟩ = |1/- ic∏_j=1^N [λ-k_j- ic'] ∏_i<j^ [k_i-k_j+ ic] |^2 × c det𝒥, where Jis aN+1-dimensional matrix, explicitly, 𝒥 = [ J_kk J_kλ; J_λ k J_λλ; ]_N+1, (J_kk)_ij = δ_ij[L+∑_m=1^Nϕ_1(k_i-k_m)-ϕ_2(k_i-λ)] -ϕ_1(k_i-k_j), (J_kλ)_i,N+1 = ϕ_2(k_i-λ),   (J_λ k)_N+1,j =-ϕ_2(k_j-λ), (J_λλ)_N+1,N+1 = ∑_m=1^Nϕ_2(k_m-λ), and{k_1,k_2,⋯,k_N; λ}are the solution of BA equation (<ref>) with the quantum numbersI_NandJ. To calculate⟨α|Ψ^†_↓(0) |⟩we need the matrix elements⟨I'_N-1,J_0,0| Ψ̂_↓(0) |I_N,J,0⟩and⟨I'_N-1,J_0,0|Ψ̂_↓(0)|I_N,J_0,1⟩for the highest and non-highest weight|α⟩, respectively. The matrix element⟨I'_N-1,J_0,0|Ψ̂_↓(0)|I_N,J,0⟩is give by ⟨I'_N-1,J_0,0|Ψ̂_↓(0)|I_N,J,0⟩ =√(N)(N-1)! detℳ ×∏_i>j(k_i-k_j+ ic)/∏_l>m(q_l-q_m+ ic)- ic/∏_j(λ-k_j- ic'). Here the(N-1)×(N-1)matrixℳhas elementsℳ_jk=M_jk-M_N,k, M_jk = t(q_k-k_j) h_2(λ-k_j) ∏_m=1^N-1h_1(q_m-k_j)/∏_m=1^Nh_1(k_m-k_j) + t(k_j-q_k) h_2(k_j-λ) ∏_m=1^N-1h_1(k_j-q_m)/∏_m=1^Nh_1(k_j-k_m), h_n(u) =u+ ic/n,      t(u)=-c/u(u+ ic), where{q_1,q_2,⋯,q_N-1}are the solution of the BA equations (<ref>) with the quantum numbersI_N-1. For the matrix element of the non-highest weight state|α⟩, we have ⟨I'_N-1,J_0,0|Ψ̂_↓(0)|I_N,J_0,1⟩ =⟨I'_N-1,J_0,0|Ψ̂_↓(0) Ŝ^-|I_N,J_0,0⟩ =N⟨I'_N-1,J_0,0|Ψ̂_↑(0) |I_N,J_0,0⟩, where⟨I'_N-1,J_0,0|Ψ̂_↑(0) |I_N,J_0,0⟩is the matrix element of the Lieb-Liniger model ⟨I'_N-1,J_0,0|Ψ̂_↑(0) |I_N,J_0,0⟩ = (N-1)!√(N)∏_i>j(k_i-k_j+ ic)/∏_l>m(q_l-q_m+ ic) det𝒮, where𝒮_i,j = S_i,j-S_N,j, S_ij = t(q_j-k_i) ∏_m=1^N-1h_1(q_m-k_i)/∏_m=1^Nh_1(k_m-k_i) -t(k_i-q_j) ∏_m=1^N-1h_1(k_i-q_m)/∏_m=1^Nh_1(k_i-k_m), h_n(u) =u+ ic/n,   t(u)=-c/u(u+ ic). The above determinant forms are convenient for us to perform numerical calculations. In order to calculateB_α,βB_αβ=⟨β|Ψ̂_↓^(0) |α⟩/√(⟨α|⟨%s|%s⟩⟩β), we need to calculate norms⟨α|$⟩, ⟨β|$⟩ and the overlap⟨β|Ψ̂_↓^(0) |α⟩. Where|β⟩ = |I'_N-1,J_0,0⟩and is the Bethe state ofN-1particles with all spin up. 
Similar to the calculation ofA_α, here|α⟩also involves the highest or non-highest weight states, namely,|α⟩ = |I_N,J,0⟩and|α⟩=|I_N,J_0,1⟩, respectively. The norms can be calculated by Eqs. (<ref>, <ref>). Similarly, for the highest weight state|α⟩we can calculate the matrix element by using Eq. (<ref>) ⟨β|Ψ̂_↓(0)|α⟩ =⟨I'_N-1,J_0,0|Ψ̂_↓(0) |I_N,J,0⟩. When|α⟩is the non-highest weight state ⟨β|Ψ̂_↓(0)|α⟩ = ⟨I'_N-1,J_0,0|Ψ̂_↓(0) Ŝ^-|I_N,J_0,0⟩ = N⟨I'_N-1,J_0,0|Ψ̂_↑(0) |I_N,J_0,0⟩ , which is given by Eq. (<ref>). §.§ S3.3 Magnon- and exciton-like states The magnon- and exciton-like states essentially comprise the feature of QF and quantum revival phenomena, see FIG.2 in the main text and FIG. <ref>. In the highest weight states|I_N,J,0⟩, we see clearly that there exists a clear structure of “Fermi sea” and there is one particle with quantum numberI_ poutside the “Fermi sea”. The latter is referred as emitted particle. The dots and arrows denote quantum numbersIandJ, respectively. We regard the state without hole inside the “Fermi sea” as magnon-like state, FIG. <ref> (a), while the state with only one hole in the deep Fermi sea as exciton-like state, FIG. <ref> (b). TheIof magnon- and exciton-like states have the following form I^ m={I^ m_0,I^ m_1,⋯,I^ m_N-2,I^ m_ p}, I^ e={I^ e_0,I^ e_1,⋯,I^ e_h-1,I^ e_h+1,⋯,I^ e_N-1,I^ e_ p}, respectively. Here,I_ℓare the quantum numbers in the “Fermi sea”,I_ℓ = I_0+ℓ,I_0is the starting quantum number of the “Fermi sea”,I_ pis the quantum number of particle excitation andI^ e_his location of hole in the exciton-like states. Note that the quantum numberJis fixed for a givenIof the emitted particle when the impurity is injected with a large momentQinto the medium of the Lieb-Liniger Bose gas. This is mainly because of the conservation of momentum. We denote the quantum number of spin-down particle asJ^ mandJ^ efor magnon- and exciton-like states, respectively. We observe that in the magnon-like states the starting quantum numberI^ m_0 =-N/2, while in the exciton-like states, the starting quantum numberI^ e_0 =1-N/2. [In the TABLE <ref>, the sum rule of magnons (excitons) also involves the magnons with other I^ m_0 (I^ e_0).] We denote the most important quantum numbers in the study of the QF and quantum revival phenomena, namely, [ {I^ m_ p,J^ m},; {I^ e_ p,I^ e_ h,J^ e}. ] Now we have all ingredients to calculate preciselyK_↓(t)to capture the essence of the dynamics of the QF and quantum revival. Without losing generality, we in this paper take the impurity wave packet as a plane wave. Precisely speaking, we also treat the sum rule∑ |A_α|^2larger than95%in our actual calculations, see TABLE. <ref>. As shown in TABLE. <ref>, the magnon-like states have the largest∑ |A_α|^2and they are the most important states in the supersonic impurity phenomenon. The sum rule of exciton-like states are relatively small. We will show the importance of these states in the QF phenomenon in Sec. S4 in this supplemental material. The states other than the highest weight states as well as the non-highest weight states are all necessary in the calculation ofK_↓(t), although they make very small contributions to the dynamics of the QF and quantum revival. The sum rule of the matrix elementB_αβdepends onα, see Eqs.(<ref>). Based on the sum rule of weights, we observe that the magnon-like states are of the most importance in the matrix elementB_αβ. We denoteN_ AMSas the number of accounted magnon-like states (AMS) in our numerical calculations. 
In this paper, we request the numerical sum ruleL∑_α∈ AMS∑_β|B_αβ|^2>0.97 N_ AMS. §.§ S3.4 Exciton energy in the thermodynamic limit In this section, we discuss the excitation energies of magnon- and exciton-like states in the thermodynamic limit. Building on the BA solution in the thermodynamic limit, i.e.,N→∞,L→∞andγ=cL/Nis finite. The energy of excited states is calculated by using the thermodynamic Bethe ansatz (TBA) equations <cit.>. The medium|⟩is the ground state of the Lieb-Liniger gas and the TBA equations of this model is given in <cit.>, namely, ρ_ c(k) + ρ_ c^ h(k) = 1/2π+ ∫_-k_0^k_0 a_2(k-k') ρ^c(k') d k', ε_ c(k) = k^2-μ+ ∫_-k_0^k_0 a_2(k-k') ε_ c(k') dk', whereρ_ c(k)is the linear density, ρ_ c(k) = {[ ρ_ c(k), |k|<k_0,; 0, |k|>0, ]. ρ_ c^ h(k) = {[ 0, |k|<k_0,; ρ_ c^ h(k), |k|>0, ].ε_ c(k)is the dressed energy of the charge sector and the integral kernala_n(x)=nc/[2π (x^2+n^2c^2)]. Here,k_0is the Fermi point (cut-off) of the wave numberskand it is determined by∫_-k_0^k_0ρ(k) dk=N/L.μis the chemical potential and it is determined by the conditionε_ c(k_0)=0. The TBA equations for the density and dressed energy of the spin degree of freedom are given by ρ_ s(λ) + ρ_ s^ h(λ)= ∫_-k_0^k_0 a_1(k-λ)ρ_ c(k) dk, ε_ s(λ)= - ∫_-k_0^k_0 a_1(k-λ)ε_ c(k) dk, respectively. The starting quantum numberI_0of the magnon and exciton states in Eqs. (<ref>) are near by the left Fermi point,lim_L→∞ 2π I_0/L = -k_ F, wherek_ Fis the Fermi momentum,k_ F=π N/L. The excitation energies carried by the quantum numbers near by the Fermi surface are zero. The excitation energies of magnon-like and exciton-like states associated with the quantum numbers Eqs. (<ref>) can be expressed as <cit.> Δ E_ m =μ+ ε_ c(k^ m_ p)+ε_ s(λ^ m), Δ E_ e =μ+ ε_ c(k^ e_ p)-ε_ c(k^ e_ h) +ε_ s(λ^ e), respectively. Here,k^ m,e_ p,k^ e_ handλ^ m,eare the rapidities of the corresponding quantum numbersI^ m,e_ p,I^ e_ handJ^ m,e, respectively. They can be determined by the following equations I/L = ∫_0^k [ρ^_ c(k')+ρ^ h_ c(k')] d k', J/L = ∫_0^λ [ρ^_ s(λ')+ρ^ h_ c(λ')] dλ'. § S4. QUANTUM FLUTTER We presented the expression of the impurity momentumK_↓(t)in Sec. S2. Using the determinant formula of the norms, overlap and matrix element obtained in the Sec. S3, we presented the impurity momentumK_↓(t)and momentum distributions in the main text. In order to conceive the microscopic origin of QF, we first study the frequency (energy) spectrum ofK_↓(t)K̃_↓(E) = 1/2π∫ d t e^- iE t K_↓(t). In Eqs. (<ref>),K_αα'is the momentum matrix element of the state pair{|α⟩,|α'⟩}and it has close relation with frequency spectrumK̃_↓(E)Eq. (<ref>). We can calculateK̃_↓(E)by taking the average value ofK_αα'in a small energy interval K̃_↓(E) = L/2Δ E∑_αα'^Δ E K_αα', whereK_αα'is given by Eq. (<ref>). In the above equation,Δ E≪ E_ FandE-Δ E <E_α-E_α'<E+Δ E, and the summationΣ^Δ Eis taken over all of the state pairs. As being given in Eq. (<ref>),K̃_↓(E)is the oscillation amplitude ofK_↓ (t)at the frequency (energy)E. We observe that the state pairs{|α⟩,|α'⟩}with an energy differenceE∼ E_α- E_α'essentially attribute to the oscillation nature of the impurity momentumK̃_↓(E), see Eq. (<ref>).K̃_↓(E)is plotted in FIG. <ref> in which several peaks were observed. The first peak ofK̃_↓(E)reveals the typical energy of quantum revival, which is governed by the magnon pairs with nearest neighbour quantum numbersI^ m_ p. The second peak is from the magnon pairs with next nearest neighbourI^ m_ p, and so on. We will discuss about the revival dynamics later. 
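For illustration, once the eigenenergies E_α and the pair contributions K_αα' of the equations above have been obtained from the determinant formulas, the binned spectrum K̃_↓(E) reduces to a simple windowed average. The following Python sketch is only meant to illustrate that average; the arrays E_alpha and K_pair, the system length L_sys, and the window dE ≪ E_F are assumed inputs, and this is not the production code used for the figures.

import numpy as np

def binned_spectrum(E_alpha, K_pair, L_sys, dE, E_max):
    # K~_down(E) = L/(2 dE) * sum of K_{alpha alpha'} over state pairs with
    # E - dE < E_alpha - E_alpha' < E + dE (windowed average as defined above).
    E_grid = np.arange(0.0, E_max, 2.0 * dE)          # bin centers
    spectrum = np.zeros_like(E_grid)
    diff = E_alpha[:, None] - E_alpha[None, :]        # energy differences of all state pairs
    for i, E in enumerate(E_grid):
        window = np.abs(diff - E) < dE
        spectrum[i] = L_sys * K_pair[window].sum() / (2.0 * dE)
    return E_grid, spectrum

In practice, only the state pairs with sufficient sum-rule weight need to be included in K_pair, as discussed in Sec. S3.1.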
The numerical result ofK̃_↓(E)shows that the typical energy of QFΔ E_ QF, is nearby0.6E_ Ffor the system withγ=10. This strikingly indicates that the frequency of the QF does not dependent on the initial momentum of the impurity once it is over the intrinsic sound velocity of the medium. We also observed fromK̃_↓(E)that the QF information ofK̃_↓(E)is concealed by the peaks of magnon pairs, see FIG. <ref>. So far, we realize that magnon pairs do not really contribute the frequency of the QF. Such an oscillation feature of QF is essentially resulted in from the magnon-exciton pairs (MEPs) described by the quantum numbers Eqs. (<ref>). We observe that the quantum numberI_ pin the two states of one MEP are the same, presenting an emitted particle. Based on the conservation of the momentum, we only need to consider the quantum numbers of hole in the exciton and spin-down quantum number in magnon state {I^ e_ h,J^ m}. The other quantum numbers can be given byI^ e_ h,J^ m, namely,I^ m_ p=I^ m_ p=QL/2π+J^ mandJ^ e = I^ e_ p - QL/2π-N/2- I^ e_ h = J^ m -N/2 -I^ e_ h. We take the summation in Eq. (<ref>) over the selected MEPs in Eq. (<ref>) and denote it asK̃^ MEPs_↓(E). We plotK̃^ MEPs_↓(E)(red lines) in FIG. <ref>. It is clear seen that the QF oscillations ofK_↓(t)and the QF peaks ofK̃_↓(E∼Δ E_ QF)originate from the coherent dynamics of the MEPs. In order to deeply understand the microscopic origin of the QF, we try to find the most relevant MEPs that comprise the characteristic of the QF. In the FIG. 2 in the main text, we consider the case whenN=30andγ=10. We find that 56 MEPs with high sum rule weights which essentially contribute to the QF behavior. Further analysis shows that the MEP with quantum number{I^ e_ h,J^ m}={-1,1}is the most relevant one. In FIG. <ref>, we plotK̃^ MEPs_↓(Δ E_ QF)with the characteristic energy differenceΔ E_ QFbetween the pair statesΔ E_ QFIn the thermodynamic limit, we observe thatk^ m_ p=k^ e_ pbecause ofI^ m_ p=I^ e_ p. Thus Eqs. (<ref>) gives Δ E_ QF = | ε_ c(k^ e_ h)- ε_ s(λ^ e)+ ε_ s(λ^ m)|. In the thermodynamic limit, we further find from the BA equations thatk^ e_ h=0,λ^ m=0andλ^ e=-∞. Consequently the oscillation frequency (energy) of the QF is given by Δ E_ QF =|ε_ c(0)|-|ε_ s(0)|. Hereε_ c(0)<0,ε_ s(0)>0andε_ s(±∞)=0. It follows that the result of Eq. (6) periodicity of QF in the main text τ_ QF = 2π/|ε_ c(0)|-|ε_ c(0)|. For the strong coupling limit we have|ε_ c(0)|-|ε_ c(0)| = E_ F[1-20γ/3+ O(γ^-2)]that gives τ_ QF = 2π t_ F[1+ 20/3γ+ O(γ^-2)]. These results were confirmed in the FIG.3 in the main text. In FIG. <ref> (a), we further demonstrate the dynamics of impurity momentum for different initial momenta, ranging fromQ< k_ FtoQ> k_ F. It is showed that the saturated momentum approximately approaches to the same value, but the oscillation amplitude increases when theQbecomes larger. WhenQis small, the QF no longer appears and the saturated momentum gradually turns to zero as decreasing theQ. In view of the fast decay process ofK_↓(t), we observe that the momentum of the impurity decays faster whenQbecomes lager. WhenQis large,K_↓even reach a negative value after the faster decay. When the impurity is injected into the medium, the density of the medium in front of the faster moving impurity increases quickly so that quantum friction between the impurity and medium increases quickly. When the initial momentumQis larger than a critical value, the density of the medium in front can be compacted enough and the impurity rebounds back from it. In FIG. 
<ref> (b), we demonstrate the interaction effect in the faster decay process and the oscillation period. From the QF periodicity Eq. (<ref>) in the main text, we observe that the periodicityτ_ QFincreases wen the interactionγdecreases, see FIG. 3 in the main text. § S5. QUANTUM REVIVAL Now we proceed to discover a microscopic origin of the quantum revival from bothK̃_↓(t)andK̃_↓(E). The first peak ofK̃_↓(E)in FIG. <ref> shows that the frequency is the energy difference between the states in a magnon pair with nearest sequent quantum numberI^ m_ p, see our discussion in the beginning of Sec. S3. Similar to the analysis on the QF, here we further show that the first magnon pair illustrated in FIG.<ref> (a) [or FIG. <ref> (a) in the main text] determines the position of the first peak of theK̃_↓(E). This is the most prominent pair of the magnon-like states for the dynamics of the quantum revival. Such a pair of the magnon-like states show the largest weight of|A|^2, see FIG. <ref> (a). The magnon-like states are denoted by{I^ m_ p, J^ m}in Eqs. (<ref>). The two states of the most prominent pair are determined by the quantum numbers{I^ m_ p, J^ m}={I_1, J_1}and{I^ m_ p, J^ m}={I_2, J_2}, leading to the largest weight|A|^2. More precisely, I_2 = I_1 ±Δ I,   J_2 = J_1 ±Δ J, following which we have k_2 = k_1 ±Δ k,   λ_2 = λ_1 ±Δλ, namelyΔ I=Δ J=1. The energy difference of the two states in this prominent pair gives the quantum revivalΔ E_L. From Eq. (<ref>), we have Δ E_L = lim_L→∞ |ε_ c(k_2) - ε_ c(k_1) + ε_ s(λ_2) - ε_ s(λ_1)| = lim_L→∞ | Δ k ε_ c'(k_1) + Δλε'_ s(λ_1)| = lim_L→∞| Δ k/Δ IΔ Iε_ c'(k_1) + Δλ/Δ JΔ J ε'_ s(λ_1)|. Moreover, we define lim_L→∞Δ I/L Δ k = ρ_ c(k) + ρ^ h_ c(k), lim_L→∞Δ J/L ΔλΔ J = ρ_ s(k) + ρ^ h_ s(k). Then the characteristic energy of quantum revival can be given by Δ E_L = Δ p | v_ p(Q-k^*) - v_ s(k^*)|, withk^*= k_ F-2π/L J_1. HereΔ p = 2π/Land sound velocities v_ p(p)|_p=(2π/LI_1-k_ F)=ε_ c'(k_1)/2π [ρ_ c(k_1)+ρ^ h_ c(k_1)], v_ s(p)|_p=(k_ F-2π/LJ_1)=ε_ s'(λ_1)/2π [ρ_ c(λ_1)+ρ^ h_ s(λ_1)]. Consequently, we find Δ E_ L = 2π/L | v_ p(Q-k^*) - v_ s(k^*)| = 2π/L ( v_ p(Q-k^*) - v_ s(k^*)), wherev_ p(p)is always larger thanv_ s(p). This remarkably gives the period of long time revival Eq. (<ref>) in the main text, namelyτ_L = 2π /Δ E_L, τ_L = N/[v_ c(Q-k^*)-v_ s(k^*)]n = L/v_ c(Q-k^*)-v_ s(k^*). In this paper,k^*was calculated numerically based on the BA equations. We also observe thatk^*is subject to the impurity initial momentumQand interaction strengthγ, see FIG. <ref>. However,k^*dose not change obviously with respect toN. The revival dynamics of the supersonic impurity reveals the reflection of the collective excitations with respect to the finite-size effect. § S6. SUPERSONIC IMPURITY WITH A GAUSSIAN WAVE PACKET
http://arxiv.org/abs/2307.03869v1
20230708004501
Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation
[ "Aditya Sanghi", "Pradeep Kumar Jayaraman", "Arianna Rampini", "Joseph Lambourne", "Hooman Shayani", "Evan Atherton", "Saeid Asgari Taghanaki" ]
cs.CV
[ "cs.CV" ]
Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation

Aditya Sanghi, Pradeep Kumar Jayaraman, Arianna Rampini, Joseph Lambourne, Hooman Shayani, Evan Atherton, Saeid Asgari Taghanaki (Autodesk Research)

[Figure: Sketch-A-Shape is a zero-shot sketch-to-3D generative model. Here we show how our method can generalize across voxel, implicit, and CAD representations and synthesize consistent 3D shapes from a variety of inputs ranging from casual doodles to professional sketches with different levels of ambiguity.]

Significant progress has recently been made in creative applications of large pre-trained models for downstream tasks in 3D vision, such as text-to-shape generation. This motivates our investigation of how these pre-trained models can be used effectively to generate 3D shapes from sketches, which has largely remained an open challenge due to the limited sketch-shape paired datasets and the varying level of abstraction in the sketches. We discover that conditioning a 3D generative model on the features (obtained from a frozen large pre-trained vision model) of synthetic renderings during training enables us to effectively generate 3D shapes from sketches at inference time. This suggests that the large pre-trained vision model features carry semantic signals that are resilient to domain shifts, i.e., allowing us to use only RGB renderings, but generalizing to sketches at inference time. We conduct a comprehensive set of experiments investigating different design factors and demonstrate the effectiveness of our straightforward approach for the generation of multiple 3D shapes per input sketch, regardless of their level of abstraction, without requiring any paired datasets during training.

§ INTRODUCTION

Throughout history, humans have used drawings and other visual representations to communicate complex ideas, concepts, and information. As hand-drawn sketches have a high level of abstraction, they allow unskilled artists or even young children to convey semantic information about 3D objects <cit.>, while providing trained professionals with a way to quickly express important geometric and stylistic details of a 3D design. The ability to create 3D models which can capture the essence of simple doodles while accurately reproducing 3D shapes described by concept design sketches will make 3D modelling more accessible to the general public, while allowing designers to rapidly explore many different design ideas and create virtual models that more accurately reflect the shape, size, and characteristics of real-world objects and environments. Previous studies have endeavored to employ deep learning techniques in generating 3D shapes from sketches <cit.>, yet there are several limitations that hinder their widespread application. Firstly, there is a lack of (sketch, 3D shape) paired data at a large scale, which forces most methods to be trained on synthetic datasets or on data collected for only a few categories. Even when a small number of categories of paired sketch-shape data has been collected <cit.>, current methods fail to generalize to different levels of abstraction in the sketches, ranging from casual doodles to detailed professional drawings.
Finally, most of the present methods incorporate strong inductive biases, such as view information <cit.>, differentiable rendering <cit.> and depth estimation <cit.>, thereby constraining their generalizability across 3D representations. To overcome the challenge of limited availability of paired data, a potential solution is to use prior knowledge encoded in large pre-trained image-text models. Recently, these large pre-trained models have been successfully applied to the 3D domain in creative ways, such as guiding the optimization of differentiable 3D representations <cit.> or to generate 3D shapes from text prompts using interchangeability of text-image embeddings <cit.>, or using them for representation learning <cit.>. In this paper, we introduce a straightforward yet effective approach called , for generating 3D shapes from sketches in a zero-shot setting using pre-trained vision models. Our method is based on the idea that 3D shape rendering features derived from large-scale pre-trained models (such as CLIP <cit.> and DINOv2 <cit.>) possess robust local semantic signals that can withstand domain shifts from renderings to sketches. In , we first train a VQ-VAE to acquire shape embeddings. Following this, a masked transformer is trained to model the distribution of shape embeddings conditioned on local semantic features from an image encoder that is pre-trained and frozen. During inference, the masked transformer is conditioned on local semantic features of the sketch instead, in order to produce the 3D shape. Our findings suggest that with some architectural design choices, this straightforward method enables us to generate several 3D shapes that can generalize across sketches of varying complexities. To sum up, we make the following contributions: * We propose , the first zero-shot approach for sketch-to-3D generation, leveraging large-scale pre-trained models to outdo the need of paired sketch-3D dataset. * We experimentally show the generalization capability of our method among various datasets (<ref>) with different levels of sketch abstraction, going from simple doodles to professional sketches. * We conduct thorough experiments to examine the different components of that contribute to the success of zero-shot shape generation via sketch. § RELATED WORK 3D Generative Models. Significant progress has been made in the field of generative models for the creation of 3D shapes in various formats such as voxels <cit.>, CAD <cit.>, implicit representations <cit.>, meshes <cit.>, and point clouds <cit.>. Recent research on 3D generative models has focused primarily on the development of generative models based on VQ-VAE <cit.>, GAN<cit.>, or diffusion models <cit.>. The present study concentrates on connecting the sketch modality with 3D shapes across three different 3D representations: voxels, CAD, and implicit representation. Although our approach is based on VQ-VAE, it can be easily extended to GAN or diffusion-based generative models. 3D Zero-Shot Learning. Large pre-trained language and 2D vision models have been creatively used in several downstream 3D vision tasks. Initial works focused on using vision-text models such as CLIP <cit.> for 3D shape generation using text <cit.>, optimizing nerfs <cit.>, deforming meshes <cit.>, stylizing meshes <cit.> and animating avatars <cit.> . 
More recently, text-to-image models such as Stable Diffusion <cit.> and Imagen <cit.>, have been used for text-to-shape generation <cit.>, single-view reconstruction <cit.>, and adding texture to 3D shapes <cit.>. To the best of our knowledge, our work is the first to explore zero-shot 3D shape generation from sketches by leveraging a pre-trained model. 3D Shape Generation from Sketch. Several supervised learning methods have been used to generate 3D shapes from sketches. Works such as <cit.> use a neural net to estimate depth and normals from a set of viewpoints for a given sketch, which are then integrated into a 3D point cloud. <cit.> proposes to use a CNN to predict the initial shape and then refine the shape using novel viewpoints using another neural network. Another work <cit.> represent the 3D shape and its occluding contours in a joint VAE latent space during training which enables them to retrieve a sketch during inference and generate a 3D shape. Sketch2Mesh <cit.> uses an encoder-decoder architecture to represent and refine a 3D shape to match the target external contour using a differentiable render. Methods such as <cit.> employ a domain adaption network between unpaired sketch and rendered image data to boost performance on abstract hand-drawn sketches. To address the ambiguity problem of sketches, <cit.> introduces an additional encoder-decoder to extract separate view and shape sketch features, while <cit.> proposes a sketch translator module to fully exploit the spatial information in a sketch and generate suitable features for 3D shape prediction. Recently, <cit.> trains a diffusion model for generation of 3D point clouds conditioned on sketches using a multi-stage training, and fine-tuning technique. However, we take the novel approach of not training on paired shape-sketch data at all and instead rely on the robustness of the local semantic features from a frozen large pre-trained image encoder such as CLIP. § METHOD Our approach strives to generate 3D shapes from sketches of different complexities, solely employing synthetic renderings, and without the requirement of a paired dataset of sketches and shapes. The training data consists of 3D shapes, each denoted by 𝐒, which can be represented as a voxel grid, implicit (e.g. occupancy), or CAD, and their 𝐫 multi-view renderings (𝐈^1:r). Our approach involves two training stages. In the first stage, the shapes are transformed into discrete sequences of indices (shape embeddings), denoted by 𝐙, using a discrete auto-encoder framework <cit.>. In the second stage, the distribution of these indices is modeled using a transformer-based generative model that is conditioned on features of shape renderings obtained from a frozen pre-trained model. These shape rendering features are a grid of local features from the frozen pre-trained model which are converted into a sequence of local features then conditioned to the transformer through a cross-attention mechanism. During inference, we use an iterative decoding scheme <cit.> to generate the shape indices iteratively based on features of the sketch. Once the shape indices are generated, we can use the decoder of the auto-encoder to generate the 3D shape. The overall process is illustrated in Figure <ref> . §.§ Stage 1: Training Discrete Auto-encoder In the first stage, we use an auto-encoder framework to capture the shape distribution into a compressed sequence of discrete indices (shape embeddings) among various modalities. 
To achieve this, we opt for the Vector Quantized Variational Auto-encoder (VQ-VAE) architecture <cit.> which efficiently models the 3D shape in a compressed latent space, circumventing posterior collapse and enabling the generation of high-quality 3D shapes. The dataset of 3D shapes 𝐒, are transformed using an encoder, E(.), into a sequence of discrete indices 𝐙, pointing to a shape dictionary, which their distributions are then modeled in stage 2 using a transformer-based generative model. This is shown below: 𝐙 = VQ(E(𝐒)), 𝐒^' = D(𝐙) The shape 𝐒^' is then generated from 𝐙 using a decoder, D(.), with a reconstruction loss L_rec(S, S^'). We also use the commitment loss <cit.> to encourage encoder output E(.) commits to an embedding in the shape dictionary, and exponential moving average strategy <cit.> to encourage dictionary enteries to gradually be pulled toward the encoded features. When dealing with voxel representation, we leverage a 3D convolution based on the ResNet architecture <cit.> for both the encoder and decoder network. Whereas with implicit representation, we rely on a ResNet-based encoder and an up-sampling process for the decoder that generates a higher resolution volume, which is then queried locally to obtain the final occupancy <cit.>. In the case of CAD representation, we use the SkexGen VQ-VAE architecture <cit.>. More details of the architectures are provided in the supplementary material. §.§ Stage 2: Masked Transformer The goal of stage 2 is to train a prior model which can effectively generate shape indices conditioned on a sketch at inference time. We achieve this by modelling the sequence of discrete indices (shape embedding 𝐙), produced from stage 1, using a conditional generative model. We use a bi-directional transformer <cit.> based network which is conditioned on the features of the synthetic 3D renderings using a cross-attention mechanism. During training, we randomly mask a fraction of shape indices with a special mask token <cit.> to produce 𝐙^msk. The training objective then becomes how to fully unmask the masked indices using the help of the provided conditional information. The training objective is to minimize: L_mask= - 𝔼_Z,C ∈ D [log p(𝐙|𝐙^msk, 𝐂)] Here, 𝐂 represents the conditional information from a given shape 𝐒 which are obtained from the multi-view image renderings of the 3D shape. At each iteration, we randomly sample a view to render an image of the 3D shape, which is then converted to local features using a locked pre-trained model. The choice of pre-trained model is an important design criteria which we investigate thoroughly in Section <ref>, and find that using large models trained on diverse data produces the most robust semantic local features which allow domain shift from synthetic renderings to sketches. The local features sequence can be obtained from different parts of the pre-trained network, which we investigate in Section <ref>. Our findings indicate that utilizing the feature grid output of the deeper layers in the pre-trained models yields better results. This is because deeper layers generate more semantic features, and the grid structure of the feature preserves its local properties. We convert this grid into a sequence using a mapping network comprising of several MLP layers. Finally, we take features obtained and add learnable positional encoding before applying cross-attention with the shape indices' features at each transformer layer. 
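As an illustration of this training stage, the following PyTorch-style sketch outlines a single optimization step: shape indices from the Stage-1 VQ-VAE are partially masked, a rendering of the same shape is encoded by the frozen pre-trained model, its local feature grid is mapped to a sequence by a small MLP, and the bidirectional transformer is trained with a cross-entropy loss on the masked positions (the condition is dropped for a small fraction of samples to enable classifier-free guidance, as described below). The modules frozen_encoder, mapping_mlp and transformer, as well as the masking-schedule details, are placeholders and assumptions rather than the exact implementation.

import math
import torch
import torch.nn.functional as F

MASK_ID = 512  # id of the special mask token, appended after the VQ codebook (assumption)

def stage2_step(shape_tokens, rendering, frozen_encoder, mapping_mlp,
                transformer, optimizer, p_uncond=0.05):
    # shape_tokens: (B, T) discrete indices from the Stage-1 VQ-VAE
    # rendering:    (B, 3, H, W) synthetic rendering of the same shapes
    B, T = shape_tokens.shape

    # Local conditioning features from the frozen pre-trained encoder (grid -> sequence).
    with torch.no_grad():
        local_feats = frozen_encoder(rendering)           # (B, N_patches, C)
    cond = mapping_mlp(local_feats)                       # (B, N_patches, D)

    # Drop the condition for a small fraction of samples (classifier-free guidance).
    drop = torch.rand(B, device=cond.device) < p_uncond
    cond = cond.masked_fill(drop[:, None, None], 0.0)     # null embedding modeled as zeros

    # Randomly mask a fraction of the shape tokens, cosine schedule over the mask ratio.
    ratio = torch.cos(0.5 * math.pi * torch.rand(B, 1, device=shape_tokens.device))
    mask = torch.rand(B, T, device=shape_tokens.device) < ratio
    mask[:, 0] |= ~mask.any(dim=1)                        # ensure at least one masked token
    masked_tokens = shape_tokens.masked_fill(mask, MASK_ID)

    # Predict the original indices; the loss is taken over masked positions only.
    logits = transformer(masked_tokens, cond)             # (B, T, codebook_size + 1)
    loss = F.cross_entropy(logits[mask], shape_tokens[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

During training this step is repeated over randomly sampled views of each shape; at inference time the same transformer is queried with sketch features instead, as described next.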
The choice of conditioning is also an important design choice which we discuss in Section <ref>. Additionally, we replace the local features sequence with a null embedding sequence 5% of the time to allow for classifier-free guidance during inference. §.§ Inference During the generation phase, we first convert the sketch into a sequence of local features using the same frozen pre-trained model utilized during training. These local features are semantically robust and serve as the conditioning query for the transformer. We employ an iterative decoding scheme with a cosine schedule, similar to the one proposed in Mask-GIT <cit.>. The process begins with a completely masked set of indices, which are gradually unmasked in parallel using the conditional information provided by the local features sequence from the sketch. At each time step, the transformer predicts the complete unmasked shape sequence, of which a specific fraction of the highest confidence masked tokens are accepted. These selected tokens are designated as unmasked for the remaining steps, while the rest of the tokens are reset to masked, except for the already unmasked tokens from the previous steps. For each time step, we also apply classifier-free guidance <cit.> with a guidance scale of 3. This process continues until all the tokens are unmasked. Finally, the completely unmasked tokens are converted into the 3D object using the shape decoder trained in stage 1. It is worth noting that we can restart the same process multiple times to generate different 3D shapes for the same sketch query. § EXPERIMENTS In this section, we present the results of our experiments evaluating the accuracy and fidelity of the generated output produced by our model. We conducted each experiment three times for each metric and reported the average result for each. The experimental setup details are provided in the supplementary material with additional results that may be of interest. Training Dataset. Our experimentation utilizes two subsets of the ShapeNet(v2) dataset <cit.>. The first subset, ShapeNet13, consists of 13 categories from ShapeNet, which were also employed in previous studies <cit.>. In line with Sketch2Model <cit.>, we adopt the same train/val/test partition. The second subset, ShapeNet55, includes all 55 categories of ShapeNet and we follow the same split as <cit.>. We use the DeepCAD <cit.> dataset to train our CAD model. Evaluation Sketch Dataset. One advantage of our method is that it's not trained on paired (shape, sketch) datasets. Therefore, to comprehensively evaluate its performance, we tested it on various sketch datasets that range from professional to non-expert sketches. Specifically, we utilized the ShapeNet-Sketch dataset <cit.>, which comprises 1300 free-hand sketches across ShapeNet13. In addition, we employed the ImageNet-Sketch dataset <cit.>, which contains 50 sketch images for 1000 ImageNet classes obtained from Google, encompassing a range of professional to non-expert sketches. Moreover, we utilized the TU-Berlin Sketch dataset <cit.>, which includes 20,000 non-expert sketches of 250 object categories. Lastly, QuickDraw Dataset <cit.> is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw! <cit.>. ImageNet-Sketch, TU-Berlin Sketch, and QuickDraw datasets also lack ground-truth 3D models, and we only utilized the categories of ShapeNet for these datasets. 
To evaluate our CAD model we use synthetic edge map sketches but don't train the model using edge maps as augmentation. Evaluation Metrics. To evaluate our method on different sketch datasets we use two metrics: classification accuracy and human evaluation which are outlined below. 0em * Classifier Accuracy. As we are dealing with sketch data that lacks ground-truth 3D models, we use the Accuracy (Acc) metric to ensure that the generated shape for a given sketch corresponds to its category. To achieve this, we employ a pre-trained shape classifier, as implemented in <cit.>. We use this metric for all datasets: ImageNet-Sketch <cit.>, TU-Berlin <cit.>, ShapeNet-Sketch <cit.>, and QuickDraw <cit.>. We refer to this metric as IS-Acc, TU-Acc, SS-Acc, and QD-Acc, respectively. As our method can generate multiple shape per sketch query, we report the mean across 5 sampled shapes for a given sketch query. * Human Perceptual Evaluation. We also use Amazon SageMaker Ground Truth and crowd workers from the Mechanical Turk workforce  <cit.> to evaluate how well our generated 3D models preserve important geometric and stylistic details from the sketches. §.§ Qualitative Results In <ref>, we visualize sample generated 3D shapes in different representations such as voxel, implicit, and CAD from sketches of different domains. As shown, our method performs reasonably well on different types of sketches (from simple to professional drawings), particularly when there is ambiguity (such as the view angle of drawings) given the nature of 2D sketches. §.§ Human Perceptual Evaluation In addition to generating shapes in the same broad object category as abstract hand drawn sketches, our method is also able to incorporate geometric and stylistic details from a sketch or concept design into the final 3D model. To demonstrate this quantitatively, we run a human perceptual evaluation using Amazon SageMaker Ground Truth and crowd workers from the Mechanical Turk workforce  <cit.>. We evaluate 691 generated models, conditioned on sketches from TU-Berlin <cit.>, ShapeNet-Sketch <cit.>, ImageNet-Sketch <cit.> and QuickDraw <cit.>. The human evaluation is posed as a two-alternative forced choice study <cit.>. The crowd workers are shown images with a sketch on the left hand side and two images of generated 3D models on the right hand side. An example is shown in <ref>. One of the generated models was conditioned on the sketch shown, while the other was conditioned on a randomly selected sketch from the same object category. The crowd workers are asked the question “Which of the 3D models on the right hand side best matches the sketch on the left hand side?". The study is designed to measure the extent to which humans perceive our generated 3D models as preserving the shape and stylistic details presented in the sketch, as opposed to simply creating a model from the same object category. We show each image to 7 independent crowd workers and count the number of images for which 4 or more of them correctly identify the 3D model which was conditioned on the sketch. The results are shown in <ref>. On average 71.1% of our generated 3D models are correctly identified by a majority of the crowd workers. We note that the sketches in TU-Berlin and ShapeNet-Sketch give rise to generations which were easier for the crowd workers to identify, with 74.9% and 73.1% being selected correctly. 
While these sketches often have a high level of abstraction, they communicate enough detail about the shape for our method to create distinctive 3D models which humans can identify. While ImageNet-Sketch contains superior artwork, often with shading, shadows and other cues to the 3D nature of the shapes, many of the pictures contain full scenes with backgrounds and additional superfluous details. This makes the generation of single objects more challenging, which is reflected by the fact that only 68.1% are correctly identified by the crowd workers. We note qualitatively that in cases where shaded sketches do not contain backgrounds or additional clutter the generated results look better, indicating the utility of our method for quickly generating 3D models from concept designs. The sketches in the QuickDraw dataset are sourced from the from the online game Quick, Draw! <cit.>, in which contributors are asked to drawn a shape in less than 20 seconds. QuickDraw is the most abstract and noisy sketch dataset, with many of the sketches being drawn with a computer mouse. While our method typically generates 3D shapes of the correct category, only 67.9% of the generations are correctly identified by the crowd workers. §.§ Comparison with Supervised Models As there is currently a lack of zero-shot methodologies for generating shapes from sketches, we compared our results to those of a supervised approach called Sketch2Model <cit.>, which was trained on a dataset of paired sketch-3D shapes. We evaluated both methods using our classifier accuracy metric, and the results are presented in <ref>. Our model was not exposed to any sketch-3D pairings during training, but it displays superior generation capabilities compared to Sketch2Model across different datasets. We attribute this difference in performance to several factors. Firstly, we believe that Sketch2Model may be more effective for single-category training rather than for the 13 categories in the ShapeNet dataset. Additionally, because Sketch2Model is a supervised method, it was not exposed to out-of-distribution sketches during training, which may have caused its performance to deteriorate. We provide further details and qualitative comparison with Sketch2Model and other supervised methods in the supplementary material. §.§ Investigating Pre-Trained Models This section involves an extensive study of several pre-trained models that are open-sourced and trained on different datasets. The results are present in Table <ref>. There are 3 major things we investigate through this experiment. First, we investigate the importance of utilizing local grid features of pre-trained models. Typically, pre-trained models possess a global projection vector that is employed for downstream tasks like classification. We compare the efficacy of conditioning our generative model with the global projection vector (row 1) versus the local grid features (row 2). Our findings demonstrate that leveraging local grid features yields better performance compared to the global projection vector for most of the datasets. Furthermore, even from a visual standpoint, we observe that local grid features preserve more local details. It is worth noting that these accuracies are further improved by utilizing classifier-free guidance (CFG), as illustrated in row 3. Next, we investigate the role of size of pre-trained models and find that increasing the size of the same class of pre-trained model, despite being trained on the same data, results in better zero-shot performance. 
This phenomenon is evident in the case of the ViT-based <cit.> CLIP model, where upgrading from the B-32 model to the L-14 model yields a significant improvement in performance. This trend is also observed in the ResNet-based <cit.> models. Interestingly, it is worth mentioning that the ResNet-based <cit.> models perform worse than their corresponding ViT-based <cit.> CLIP models. This could be attributed to the ResNet models' emphasis on high-frequency, textural features <cit.>. Finally, we explore how different datasets impact the training of these models. Our findings indicate that the model's performance remains comparable when trained on extensive datasets such as LAION-2B <cit.>, DINOv2 Dataset <cit.> or OpenAI internal dataset <cit.>. However, when we reduce the dataset size significantly, such as in the case of the masked autoencoder <cit.> trained on 400 times less data from ImageNet <cit.>, its performance significantly declines. Despite being trained on the reconstruction objective, we believe that the masked autoencoder's performance drop is primarily due to the significantly reduced dataset size, as it still performs reasonably well on this task. Additionally, it is important to highlight that language supervision is unnecessary to acquire resilient features from extensive pre-trained models, as demonstrated by the outcomes of DINOv2. §.§ Accuracy across Different Layers In this experiment, we explore the optimal layer of the vision transformer (L-14 model) from which we extract the local conditioning features. Table <ref> summarizes our findings. We note that as we delve deeper into the vision transformer architecture, the features extracted from the deeper layers contain more significant semantic information leading to higher accuracy. Moreover, this indicates that the model maintains the positional correlation between patches instead of treating them as global information repositories as visually we can see local semantic generation. §.§ Design Choices for Conditioning Table <ref> presents our investigation into the impact of the mapping network's size and the attention mechanism used for conditioning the image features to the transformer. Our results show that incorporating a mapping layer does enhance the model's performance, with the optimal number of MLP layers being two. Furthermore, our findings suggest that cross-attention with a learnable positional embedding is the most effective conditioning mechanism, as evidenced by the deteriorated performance on removing positional embedding or using self attention as shown in the last two rows of the table. §.§ Effect of Augmentation In our final investigation, we explore whether the addition of data augmentation improves the accuracy of shape generation across datasets. The results are summarized in Table <ref>. We make two noteworthy observations. Firstly, even without data augmentation, our method performs relatively well, indicating the robustness of pre-trained models. Secondly, different types of augmentations have a more significant impact on certain datasets than others. For instance, affine transformation significantly enhances the performance of QuickDraw and ImageNet Sketch, while canny edge augmentation is more effective for the ShapeNet Sketch dataset. Consequently, we decide to train a network with all augmentations and find that, on balance across datasets, it performs the best. 
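To make the augmentation study concrete, the snippet below sketches the two augmentations that matter most in this comparison, a random affine warp and a Canny edge-map rendering, applied to an RGB rendering before it is passed to the frozen encoder. It is an illustrative sketch using OpenCV and NumPy, not the exact training pipeline; the parameter ranges and thresholds are assumptions.

import cv2
import numpy as np

def random_affine(img, max_rot_deg=15, max_shift=0.1, max_scale=0.15):
    # Randomly rotate, scale and translate a rendering (H, W, 3), uint8.
    h, w = img.shape[:2]
    angle = np.random.uniform(-max_rot_deg, max_rot_deg)
    scale = 1.0 + np.random.uniform(-max_scale, max_scale)
    tx = np.random.uniform(-max_shift, max_shift) * w
    ty = np.random.uniform(-max_shift, max_shift) * h
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (tx, ty)
    return cv2.warpAffine(img, M, (w, h), borderValue=(255, 255, 255))

def canny_edge_augment(img, low=100, high=200):
    # Replace the rendering by a sketch-like edge map (white background, dark strokes).
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, low, high)
    return cv2.cvtColor(255 - edges, cv2.COLOR_GRAY2RGB)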
§ CONCLUSION In this paper, we demonstrate how a 3D generative model conditioned on local features from a pre-trained large-scale image model such as CLIP can be used to generate 3D shapes from sketches. We show how this method can generate multiple shapes for different abstraction of sketches and can be applied to multiple 3D representations. Future work will involve training on much larger and diverse 3D shape datasets and consequently testing on different styles of sketches and levels of details. ieee_fullname [ Supplementary Material ] § HUMAN PERCEPTUAL EVALUATION In Subsection 4.2 (main paper) we show the results of the human perceptual study broken down according to the dataset which the target sketch came from. In addition, the results can be broken down based on object category of the target sketch, as shown in <ref>. We see a wide range of performance across the different object categories, with “lamps" being correctly identified 89.1% of the time, while phones are identified just 52.5% of the time, little better than random. Categories which perform well, such as chair, table and sofa, tend to have distinctive shapes which are easy to communicate with sketches. Categories like airplane and gun produce good models, but these are not distinctive enough for the human evaluators to distinguish the correct 3D model from a random model of in the same category. Lower performance on these categories may also relate to the difficultly of drawing objects of these types. We believe having texture can further improve the human perceptual results. As each generated model is rated by 7 individual crowd workers, we can count the number of raters who correctly identified the generated model, giving us a “shape recognizably score" from 0 to 7. In <ref> we show examples from selected categories with the highest and lowest “shape recognizably scores". For “airplane" category the least recognizable model appears to be in the wrong category, due to the unusual orientation of the sketch. The most and least recognizable sketch in the “bench" category both come from the Imagenet-Sketch dataset. The sketch for the most recognizable model contains a single bench while the sketch for the least recognizable model also contains background elements like trees and a lampost. For the “gun" category the most recognizable model is actually from a sketch which looks nothing like a gun. The least recognizable model is a generic gun which does not closely follow the shape of the sketch. The Figure shows how the human evaluation is measure the ability of our method to generate distinctive shapes reflecting the geometry of the sketches as well as general generation quality. § COMPARISON WITH SUPERVISED METHODS §.§ Quantitative comparison We evaluate the quality of generated shapes on the ShapeNet-Sketch dataset <cit.> using Intersection over Union (IOU) with 32^3 voxel shapes, as shown in <cit.>. This is the only dataset among the four we assessed that includes ground truth 3D voxels. We compare our results to those of other supervised methods presented in Table <ref>, exactly as in <cit.>. Our generative model generates 5 shapes based on a given sketch query in the ShapeNet-Sketch dataset and averages the IOU results. Although our method is not trained on any sketch data, it outperforms the supervised baseline. This indicates that the pre-trained model's learned features are effective in enabling our method to generate 3D shapes using sketches in a zero-shot manner. 
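For reference, the IOU used in this comparison is computed directly between occupancy grids; a minimal sketch assuming binary-thresholded 32^3 NumPy arrays is given below (the threshold value is an assumption).

import numpy as np

def voxel_iou(pred, gt, threshold=0.5):
    # Intersection over Union between two voxel grids of the same resolution.
    # pred, gt: arrays of shape (32, 32, 32) with occupancy values in [0, 1].
    p = pred > threshold
    g = gt > threshold
    intersection = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return intersection / union if union > 0 else 1.0

# Averaging over the 5 shapes generated per sketch (hypothetical arrays):
# scores = [voxel_iou(sample, gt_voxels) for sample in generated_samples]
# mean_iou = float(np.mean(scores))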
§.§ Qualitative comparison We additionally provide a qualitative comparison with Sketch2Model <cit.> and SketchSampler <cit.>. For this comparison, we considered diverse sketches with different levels of abstraction in the same classes of ShapeNet from four datasets: TU-Berlin <cit.>, ShapeNet-Sketch <cit.>, ImageNet-Sketch <cit.>, and QuickDraw <cit.>. Implementation details can be found in <ref>. Results are in <ref>. We can see that Sketch2Model reconstructed meshes often grasp the overall sketch shape, but they appear too smooth and lack geometric details. This method was originally intended for a single category scenario, as presented in the paper. However, this is often unpractical. Similarly, SketchSampler fails to generalize to abstract or out-of-distribution sketches. The resulting point clouds present artifacts and outliers, especially in the direction of the point of view of the sketch (shapes proportion are only preserved when the point clouds are seen from this point of view). Unlike our approach, SketchSampler is designed for professional sketches only, with reliable shapes and fine-grained details. Thus, it cannot deal with sketches with significant deformation or only expressing conceptual ideas, like the ones in QuickDraw <cit.>. § ARCHITECTURE AND EXPERIMENT DETAILS Training Details. We use the Adam Optimizer <cit.> with a fixed learning rate of 1e-4 for training. The network is trained for 300 epochs during Stage 1 and for 250 epochs during Stage 2. We do not employ any learning rate scheduler during training. We train the 32^3 voxel model solely on the ShapeNet13 dataset, while the Implicit model is trained on the ShapeNet55 subset. The CAD model is trained on the DeepCAD dataset <cit.>. This is done to demonstrate the versatility and adaptability of our method to different datasets. Stage 1 Details. For both the Implicit VQ-VAE and 32^3 VQ-VAE we use a codebook size of 512, a grid size of 8^3 and embedding dimensions of size 64. We employ the ResNet architecture for the 32^3 VQ-VAE, for both the encoder and decoder. In the case of Implict VQ-VAE, we use the ResNet architecture for the encoder whereas we use a decoder that produces a higher resolution volume, which is then queried locally to obtain the final occupancy <cit.>. The pretrained VQ-VAE from SkexGen <cit.> is used for the CAD representation which is composed of three Transformer encoders and decoders for the topology, geometry and extrusions of a CAD model. The models output 4+2+4=10 codes, with a total codebook size of 1000. Stage 2 Details. For Stage 2, we utilize a bidirectional Transformer with 8 attention blocks, 8 attention heads, and a token size of 256. We use 24 renderings <cit.> for both the ShapeNet13 and ShapeNet55 experiments. During inference, we run the Transformer for 15 steps with classifier-free guidance, and the scale parameter is set to 3. The CLIP ViT-L/14 model is employed in all experiments, except in Table 3 of the main paper, where we conduct an ablation study over different pre-trained models. For all experiments, except Table 4, we incorporate cross-attention with learnable positional embedding and a mapping network consisting of 2 layers of MLP. We do not apply any augmentation for the quantitative experiments, except for the results presented in Table 6 and Table 2 of the main paper. For the CAD results, we used a CLIP ViT-B/32 model. Sketch2Model. 
The authors of Sketch2Model released ShapeNet-Synthetic as the training dataset <cit.>, which consists of synthetic sketches of objects from 13 categories from ShapeNet. These objects have been rendered from 20 different views. For training Sketch2Model, we used the official implementation provided in <cit.>, along with the recommended hyperparameters. This implementation uses a step-type learning rate policy, beginning from 1e-4 and decreasing by 0.3 every 800 epochs, and trains for 2000 epochs with the Adam optimizer. We trained the model on all 13 categories of ShapeNet-Synthetic using the same training/test split of the original paper. SketchSampler. This method employs as training dataset Synthetic-LineDrawing <cit.>, a paired sketch-3D dataset based on 3D models from ShapeNet. In our experiments, we used the official implementation, cited in the original paper <cit.>. In particular, we used the pre-trained model released by the authors, and pre-processed the input sketches to be in the same format of Synthetic-LineDrawing ones. § COMPARISON WITH POINT·E Furthermore, we conducted a comparison between our work and Point·E <cit.>, as illustrated in the table provided below (Row 1). The results clearly demonstrate the superior performance of our method, indicating the merit of our design choices. § NATURAL IMAGES RESULTS We explored the applicability of our method to natural images, as it is robust to domain shifts between renderings and sketches. The outcomes are depicted in Figure <ref>, indicating the efficacy of our method in generating natural images, including those with backgrounds. We believe that this discovery would be of interest to the Single View Reconstruction community. § FAILURE CASES This section demonstrates the limitations of our method, as illustrated in Figure <ref>. The outcomes reveal that our method encounters difficulties in generalizing to shapes that are beyond those present in ShapeNet13, as depicted in the first row. Furthermore, our method also faces challenges when dealing with sketches that depict multiple shapes, as shown in the second row. Lastly, our method experiences difficulties in accurately reproducing the local details of shapes, which we consider to be an intriguing direction for future work. § SOCIETAL IMPACT The societal impact of Sketch-to-3D technology can be significant in various fields such as architecture, product design, gaming, and entertainment. With the help of Sketch-to-3D technology, designers and architects can create realistic 3D models quickly and efficiently, reducing the overall time and cost of the design process. However, it is important to note that the widespread adoption of Sketch-to-3D technology could also lead to job displacement in certain industries. As with any technological advancement, it is crucial to consider the potential social and economic impacts and work towards ensuring a smooth transition for workers and communities affected by such changes. § FUTURE WORK We aim to concentrate on expanding this method to handle bigger 3D datasets for our future work. Additionally, we think that enhancing the Stage 1 VQ-VAE can aid in preserving the local features of the 3D shape. Lastly, an intriguing avenue to explore would be to combine sketch with text conditioning, resulting in a more adaptable generative model. § MORE QUALITATIVE RESULTS Additional results are provided in Figure <ref> and Figure <ref>.
http://arxiv.org/abs/2307.04100v1
20230709052546
Visible and infrared self-supervised fusion trained on a single example
[ "Nati Ofir" ]
cs.CV
[ "cs.CV" ]
Visible and infrared self-supervised fusion trained on a single example

Nati Ofir

This paper addresses the problem of visible (RGB) to Near-Infrared (NIR) image fusion. Multispectral imaging is an important task relevant to image processing and computer vision, even more so since the development of the RGBT sensor. While the visible image sees color and suffers from noise, haze, and clouds, the NIR channel captures a clearer picture and is strongly required by applications such as dehazing or object detection. The proposed approach fuses these two aligned channels by training a Convolutional-Neural-Network (CNN) with Self-Supervised-Learning (SSL) on a single example. For each such pair, RGB and IR, the network is trained for tens of seconds to deduce the final fusion. The SSL is based on a Structural-Similarity (SSIM) loss combined with an Edge-Preservation (EP) loss. The labels for the SSL are the input channels themselves. This fusion preserves the relevant detail of each spectral channel while not relying on a heavy training process. In the experiments section, the proposed approach achieves better qualitative and quantitative multispectral fusion results with respect to other recent methods that are not based on large-dataset training.

§ INTRODUCTION

The problem of visible-to-infrared image fusion is a well-studied area with a plethora of works. Even though many solutions have been developed, there is still a need for an Artificial-Intelligence (AI) approach that is based on Deep-Learning (DL) yet does not require heavy pre-training or the acquisition of a large dataset to carry out a single multispectral fusion. This paper introduces a DL method that works on a single example and produces a fusion result in an SSL manner such that no manual human labeling is required. Given this solution, every multispectral camera can be extended with a fusion channel such that the observer will be able to see the details captured by each spectrum without flickering between the different images. While the visible RGB (0.4-0.7μm) sees color information, the NIR (0.8-2.5μm) sees beyond haze and fog and suffers less from the noise of low-light imaging. Since each spectral channel captures different information about the scene, their fusion is informative and relevant for a person observing the camera. While most DL fusion approaches, such as attention-based <cit.>, require a time-consuming training phase, the proposed method trains the CNN weights for each input image for about forty seconds on an Nvidia GeForce GTX 3060 GPU. In addition, while classic image fusion, such as <cit.>, is relatively fast to compute, the experiments in this paper show that such methods preserve less of the input detail according to several quantitative measurements. For example, Figure <ref> demonstrates the proposed method's results of RGB to NIR fusion on a country example of the dataset <cit.>. These results manage to combine the information of both inputs; it can be seen that the far mountains, visible only in infrared, are emphasized by the computed CNN in the final fusion. Moreover, the color information of the RGB sensor is preserved in the fusion. Even though this method is based on a learned CNN, the outcome looks natural and free of noticeable artifacts. Often, the input channels are not aligned with each other, and multispectral image registration is required as a preprocessing step.
Since the dataset <cit.> contains small misalignments, this paper proposes simple solutions for that problem. The first approach is to align the images in advance with methods tailored to multispectral imaging, whether DL-based <cit.> or based on traditional computer vision <cit.>. The second solution, which can be integrated into the proposed CNN architecture, is to learn a Spatial Transformer Network (STN) <cit.> in a holistic, end-to-end manner to compute the final aligned fusion result. As this example shows, the CNN output does not suffer from channel misregistration. This manuscript is organized as follows. Section <ref> covers previous methods for image fusion. Next, Section <ref> explains the proposed approach in detail, including the CNN architecture, training algorithm, and loss functions. Then, Section <ref> illustrates the fusion performance with respect to other methods that do not depend on a time-consuming training phase. Finally, the paper is concluded in Section <ref>. § PREVIOUS WORK Image fusion is a classic problem of computer vision. Early methods utilized signal characteristics for fusion, such as wavelet-based methods <cit.>. Laplacian pyramid blending was used, for example, to overcome multi-focus image capturing <cit.>. Statistical features of the input images can contribute to their fusion, as in Principal Component Analysis (PCA) <cit.>. Fusion can also be carried out according to a spectral analysis of the images, as introduced in <cit.>. A recent approach utilized superpixel <cit.> segmentation for content-based multispectral fusion <cit.>. The DL revolution produced many related works with state-of-the-art (SOTA) blending performance, such as <cit.>. Visible and infrared fusion has also been combined with DL to enhance object detection <cit.>. The proposed method utilizes DL techniques and a lightweight CNN architecture, yet, contrary to the most recent approaches, does not depend on heavy training processes and large datasets. The idea of training a CNN on a single example has shown significant potential in super-resolution <cit.> and in image generation with Generative Adversarial Networks (GANs) <cit.>. This work is the first to utilize single-image training for multispectral image fusion. If the input spectral channels are not geometrically aligned, an a priori step of multispectral registration is required. Single-channel registration can be carried out with engineered feature descriptors such as the Scale-Invariant Feature Transform (SIFT) <cit.>. Unfortunately, regular alignment methods usually fail in the multispectral scenario, and a tailored approach is therefore needed. A descriptor that is invariant across spectra can be based on edge detection <cit.>, like Canny <cit.>; however, this approach is limited in the geometric transformations it can handle. An additional option is Mutual-Information-based registration <cit.>, which usually solves translations or small optical-flow fields. Recent methods utilize DL to compute a spectra-invariant descriptor, like <cit.>; unfortunately, this method is also geometrically limited. Another DL method learned a hybrid network for multispectral keypoint matching <cit.>; it shows better accuracy but depends on a manually labeled training dataset. The dataset fused by the proposed method <cit.> contains small misalignments that are usually resolved holistically by the learned CNN.
The geometric correction can also be trained using a Spatial Transformer Network (STN) <cit.>, which computes a geometric transformation by end-to-end learning. In conclusion, multispectral image alignment is a challenging problem that remains largely unsolved; however, it has become less critical since the development of RGBT cameras <cit.>. Self-Supervised Learning (SSL) is a relevant field, enabling AI and DL to be independent of human labeling. A common SSL approach utilizes contrastive learning <cit.>. In this paper, the proposed method uses the input spectral channels themselves as labels for their fusion, based on a Structural Similarity (SSIM) <cit.> loss and an Edge-Preservation (EP) loss <cit.>. As a whole, this study introduces a holistic solution for visible-to-infrared fusion and registration based on SSL. § THE PROPOSED MULTISPECTRAL FUSION This section introduces the proposed method for fusing visible and infrared multispectral images by training a fusion CNN on a single example for several seconds using self-supervised loss functions. §.§ Network architecture The proposed CNN architecture for image fusion takes two channels of any image dimension and outputs a single channel with the same height and width as the input. A typical image in the dataset used to evaluate the method <cit.> is 900x768 pixels. The compact fusion network contains four convolutions with 3x3 kernels; the first three are followed by a ReLU(x) = max(x,0) activation, and the final output convolution is followed by Sigmoid(x) = e^x/(1+e^x). The architecture contains two skip connections based on element-wise addition. Before the feed-forward CNN, an STN is applied to align the spectral channels. In addition, a UNet <cit.> with a ResNet-18 backbone <cit.> is applied in parallel to the feed-forward CNN to obtain a smooth fusion with semantic information. For a graphical overview see Figure <ref>, and for the full list of CNN parameters see Table <ref>. The total number of parameters is ≈ 4M, so the CNN is versatile and can be trained quickly. In the experiments of Section <ref>, an ablation study is performed on this architecture, and each part is assigned a contribution score toward the final fusion result. Figure <ref> shows a compact version of the proposed architecture which, according to the ablation study in this paper, contributes most of the final fusion quality. §.§ Training algorithm To train the CNN, a per-example training loop is introduced. See Algorithm <ref> for the whole fusion algorithm, consisting mainly of the self-supervised training loop. The RGB input image is first converted to grayscale, and the training then computes the CNN weights to fuse a specific pair of NIR and grayscale images. During training, the network weights are updated by a combination of the SSIM <cit.> and Edge-Preservation <cit.> losses. Finally, after the training loop, the fusion is computed and used to modify the RGB channels so that they contain the fusion result. Three hundred epochs were found to be sufficient for high-quality fusion, and the CNN is initialized with random weights.
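To make the training procedure concrete, the following is a minimal PyTorch sketch of the per-example self-supervised loop described above. It is an illustration only, not the authors' implementation: the channel width, learning rate, input shapes of (1, 1, H, W) in [0, 1], a simplified global-statistics SSIM and a finite-difference edge term are all assumptions introduced here, and the UNet and STN branches of the full architecture are omitted.

import torch
import torch.nn as nn

class CompactFusionNet(nn.Module):
    """Small feed-forward branch: four 3x3 convolutions, ReLU, sigmoid output."""
    def __init__(self, width=32):
        super().__init__()
        self.c1 = nn.Conv2d(2, width, 3, padding=1)
        self.c2 = nn.Conv2d(width, width, 3, padding=1)
        self.c3 = nn.Conv2d(width, width, 3, padding=1)
        self.out = nn.Conv2d(width, 1, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        h1 = self.act(self.c1(x))
        h2 = self.act(self.c2(h1)) + h1          # first additive skip connection
        h3 = self.act(self.c3(h2)) + h2          # second additive skip connection
        return torch.sigmoid(self.out(h3))       # fused single channel in [0, 1]

def global_ssim(a, b, c1=0.01**2, c2=0.03**2):
    """Differentiable SSIM from global image statistics (simplified stand-in)."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2*mu_a*mu_b + c1) * (2*cov + c2)) / ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def edge_loss(a, b):
    """Edge-preservation term: L2 distance between finite-difference gradients."""
    dax, day = a[..., :, 1:] - a[..., :, :-1], a[..., 1:, :] - a[..., :-1, :]
    dbx, dby = b[..., :, 1:] - b[..., :, :-1], b[..., 1:, :] - b[..., :-1, :]
    return ((dax - dbx)**2).mean() + ((day - dby)**2).mean()

def fuse_pair(gray, nir, epochs=300, lr=1e-3):
    """Train from random weights on a single (grayscale, NIR) pair, return fusion."""
    net = CompactFusionNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    x = torch.cat([gray, nir], dim=1)             # shape (1, 2, H, W)
    for _ in range(epochs):
        opt.zero_grad()
        f = net(x)
        loss = sum((1 - global_ssim(f, c)) + edge_loss(f, c) for c in (gray, nir))
        loss.backward()
        opt.step()
    return net(x).detach()

Because the "dataset" here is a single image pair, the loop deliberately overfits the small network to that pair, which is precisely the intended behavior of the per-example approach.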
§.§ Loss functions The loss functions used to train the CNN are SSIM and Edge Preservation, each self-labeled with the input images. Given two input images I_1, I_2, the SSIM, which correlates with the human visual system, is defined by: SSIM(I_1,I_2) = (2μ_1μ_2+c_1)(2σ_12+c_2)/((μ_1^2+μ_2^2+c_1)(σ_1^2+σ_2^2+c_2)), where μ is the mean of each image, σ is the standard deviation and σ_12 is the joint covariance. This similarity function is widely used to model the perceived similarity of images, and it has a differentiable loss definition <cit.>. The Edge-Preservation (EP) loss is a regular reconstruction loss applied after image gradient computation: EP(I_1,I_2) = ||∇ I_1(x)-∇ I_2(x)||_2^2. In the experiments of Section <ref> it is shown that using the EP loss in addition to SSIM improves the quantitative fusion results of the proposed method. §.§ Multispectral registration The dataset of <cit.> contains small misalignments between the spectral channels that are essentially aligned holistically by the convolutions of the proposed CNN architecture. If the misregistration is more significant, however, there are approaches to solve it before fusing with the proposed self-supervised approach. The first solution is based on Spatial Transformer Networks (STN) <cit.>: an STN is applied to the NIR channel at the beginning of the CNN, and the whole network is trained with the proposed method. If the misregistration is dramatic, explicit matching is required, such as the algorithm of <cit.>. § RESULTS The proposed method is evaluated both quantitatively and qualitatively. For the evaluation, the multispectral dataset <cit.> contains 954 pairs of NIR and RGB images, divided into categories such as country, mountain, urban, and street. The following experiments show that the proposed method produces better results than alternative fast methods for image fusion in terms of SSIM, Canny <cit.> edge preservation, and statistical correlation. The proposed approach is compared to the recent SuperPixel <cit.>, PCA Fusion <cit.>, and Spectral Fusion <cit.>. In addition, the contribution of the edge-preservation loss itself is highlighted. Figure <ref> shows the proposed method's visual results when fusing RGB and IR images from the dataset of <cit.>. This approach manages to fuse images from different categories smoothly while maintaining the relevant information of each spectral channel. Figure <ref> compares the proposed fusion algorithm to the recent SuperPixel <cit.> method and shows that the proposed approach picks the relevant information of each spectral channel even though it is holistic and trained in an end-to-end fashion. The SuperPixel method is based on classic computer vision and is engineered to produce such results; the proposed algorithm achieves a similar quality of image fusion while relying only on compact, short per-example CNN training. Table <ref> compares the edge preservation of the method when training with and without the EP loss. For input images I_1, I_2, their fusion F, and their corresponding Canny <cit.> binary edge maps C_1, C_2, C_F, this score is defined by: EP(I_1,I_2,F) = 0.5∑_i∑_x C_i(x) · C_F(x)/∑_x C_i(x). The table demonstrates that the EP loss is crucial for preserving the edge maps in the proposed self-supervised fusion. In addition, Table <ref> shows that the self-supervised fusion achieves the highest SSIM fusion score, where: SSIM(I_1,I_2,F) = 0.5 SSIM(I_1,F)+0.5 SSIM(I_2,F). This is further evidence of the quality of the proposed algorithm. Moreover, Table <ref> depicts a similar result for the correlation metric: corr(I_1,I_2,F) = 0.5 corr(I_1,F)+0.5 corr(I_2,F).
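For reference, the three evaluation scores defined above can be computed along the following lines. This is a sketch, not the paper's evaluation code: it assumes 8-bit grayscale inputs, arbitrary Canny thresholds, and off-the-shelf OpenCV and scikit-image implementations.

import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def canny_edge_preservation(i1, i2, fused, lo=100, hi=200):
    """Average fraction of each input's Canny edges that survive in the fusion."""
    c1, c2, cf = (cv2.Canny(im, lo, hi) > 0 for im in (i1, i2, fused))
    keep = lambda c: (c & cf).sum() / max(c.sum(), 1)
    return 0.5 * (keep(c1) + keep(c2))

def fusion_ssim(i1, i2, fused):
    return 0.5 * (ssim(i1, fused) + ssim(i2, fused))

def fusion_corr(i1, i2, fused):
    corr = lambda a, b: np.corrcoef(a.ravel(), b.ravel())[0, 1]
    return 0.5 * (corr(i1, fused) + corr(i2, fused))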
Finally, Table <ref> reports an ablation study of the proposed CNN architecture, listing the fusion SSIM score for each alternative: Compact, Compact+UNet, and Compact+UNet+STN. It shows that even the compact CNN alone can fuse the input images with high quality; however, adding the extra parts to the architecture improves the overall performance of the self-supervised training. Overall, this experimental section shows that the self-supervised fusion method trained on a single example achieves high-quality image fusion with respect to competitive fusion alternatives. § CONCLUSIONS In conclusion, this paper introduces a novel approach for infrared and visible image fusion based on short, self-supervised CNN training on a single example pair. The paper presented the method's technical details, including the CNN architecture, training algorithm, and the relevant loss functions. In addition, the experiments showed that the proposed method obtains the best results, both quantitatively and qualitatively, among competitive methods for fast multispectral fusion. Overall, this manuscript introduces a practical approach that can be incorporated easily into multi-sensor cameras and systems.
http://arxiv.org/abs/2307.04889v2
20230710202446
Critical behavior of cascading failures in overloaded networks
[ "Ignacio A. Perez", "Dana Ben Porath", "Cristian E. La Rocca", "Lidia A. Braunstein", "Shlomo Havlin" ]
physics.soc-ph
[ "physics.soc-ph" ]
Correspondence: [email protected] Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR)-Departamento de Física, FCEyN, Universidad Nacional de Mar del Plata-CONICET, Deán Funes 3350, (7600) Mar del Plata, Argentina Formerly Dana Vaknin Faculty of Engineering and the Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, Israel Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR)-Departamento de Física, FCEyN, Universidad Nacional de Mar del Plata-CONICET, Deán Funes 3350, (7600) Mar del Plata, Argentina Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR)-Departamento de Física, FCEyN, Universidad Nacional de Mar del Plata-CONICET, Deán Funes 3350, (7600) Mar del Plata, Argentina Physics Department, Boston University, 590 Commonwealth Ave., Boston, Massachusetts 02215, USA Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel Physics Department, Boston University, 590 Commonwealth Ave., Boston, Massachusetts 02215, USA While abrupt network breakdowns due to overloads and cascading failures have been studied extensively, the critical exponents and the universality class of such phase transitions have not been discussed. Here we study breakdowns triggered by failures of links and overloads in networks with a characteristic spatial link length ζ. Our results indicate that this abrupt transition has features and critical exponents similar to those of interdependent networks, suggesting that both systems are in the same universality class. For weakly embedded systems (i.e., ζ of the order of the system size L) we observe a mixed-order transition, where the order parameter collapses following a long critical plateau. On the other hand, strongly embedded systems (i.e., ζ≪ L) exhibit a pure first-order transition, involving nucleation and growth of damage. The system's critical behavior in both limits is the same as that observed in interdependent networks. Critical behavior of cascading failures in overloaded networks =============================================================== § INTRODUCTION Cascading failures and system collapse due to overloads have been modeled and studied within a network framework <cit.>. Relevant infrastructure such as power grids, transportation networks, and communication systems, much of which is embedded in two- or three-dimensional space <cit.>, is threatened by overloads in which even a small failure (e.g., a deliberate attack, natural disaster, or random malfunction) may spread further overload failures, producing a partial or total collapse. Thus, understanding the origin, dynamics, and laws of cascading failures due to overloads is crucial for ensuring the stability, reliability, and resilience of the infrastructure and services that we rely on every day.
In contrast to ideal spatial systems such as lattices, many real-world networks have links with a characteristic length ζ <cit.>. Several studies <cit.> model this structural property with a 2D lattice where the sites are the nodes of the network and the link lengths are chosen from an exponential distribution, P(r) ∼ exp(-r/ζ) (the so-called ζ-model), which produces networks with a dimension that changes from two, for small ζ (short links), to infinite dimension for large ζ (of the order of the system linear size L) <cit.>. Thus, in the ζ-model, the parameter ζ represents the strength of the spatial embedding. A fundamental model for cascading failures due to overloads is the Motter and Lai (ML) model <cit.>, which introduced and defined the concept of load and overload for a node or element in a network. In this model, the load is defined as the number of shortest paths that pass through the node (or link), and it is considered a measure of node relevance in the transmission of some quantity (e.g., information or energy) throughout the system. They defined a threshold called capacity, which is proportional to the initial load and represents the maximum amount of load that a node can hold. Above this threshold, the node is regarded as overloaded and fails. However, the shortest path is not always the optimal path <cit.>. Thus, a reasonable modification of this model is to define weighted networks, where links have associated weights that may indicate, for instance, the time (or cost) that it takes to travel across a given link. In this way, optimal paths, which represent the paths with minimal travel time (or cost) between nodes, are considered instead of shortest paths to define the node or link loads. Currently, the critical behavior and the universality class of the phase transition due to cascading failures induced by overloads have not been systematically studied. Here we study this phase transition in both the spatial ζ-model <cit.> and Erdős-Rényi (ER) <cit.> networks, and we find indications that it belongs to the same universality class as percolation of interdependent networks <cit.>. We observe that for weakly or non-spatially-embedded systems, like ER networks or the ζ-model for large ζ (of the order of the system's linear size L), there exists a mixed-order transition, similar to interdependent ER networks <cit.>. At and near this abrupt transition, we find a long-term plateau in the order parameter characterized by critical exponents. In contrast, for strongly embedded networks (i.e., ζ ≪ L), we observe a pure first-order transition caused by nucleation of random damage, a behavior also exhibited by interdependent lattices with dependencies of finite length or spatial multiplex networks <cit.>. § MODEL For the construction of the networks we use the ζ-model <cit.>. It consists of nodes located at the vertices of a two-dimensional lattice of size N = L × L, with links created between two different nodes according to the following steps (a minimal construction and cascade sketch is given at the end of this section): 1) For each of the N nodes in the network, we assign integer coordinates (x,y), with x, y ∈ [1,L]. 2) We randomly select a node i with coordinates (x_i,y_i) and draw a ray of length r, taken from an exponential distribution P(r) ∼ exp(-r/ζ), and a random angle θ above the horizontal axis, uniformly distributed. 3) We link node i with node j, where j is the closest node to the end point of the ray, p, with real coordinates (p_x,p_y) = (x_i + r cos θ, y_i + r sin θ).
We repeat the process until we build a network with an average degree ⟨ k ⟩ (we do not allow self-connections or multiple connections, and we assume periodic boundary conditions). Note that it is easy to generalize the ζ-model to any d-dimensional lattice. Regarding the cascade dynamics due to overloads, we study the ML model <cit.> in weighted networks, with positive weights that follow a Gaussian distribution. In this setting, the load of node i, L_i(t) ≡ L_i^t, is defined as the number of optimal paths between all pairs of nodes, excluding node i, that pass through node i at time t. The amount of load that a node can sustain at any time is given by its capacity, C_i = L^0_i(1 + α), which is proportional to the initial load L^0_i. The parameter α is the tolerance of the system, and it represents the resilience of nodes to failure. At t = 1, we perform a random link percolation process by removing a fraction 1 - p of links, with p ∈ [0, 1]. As a result, optimal paths throughout the network change, producing modifications in node loads, which may generate successive failures of nodes that become overloaded, in a cascade manner (see Fig. <ref>). After removing the links, we advance one unit of time and compute the new loads. For t > 1, at each time step, node i fails if L_i^t > C_i. We repeat the process until there are no more failures in the network. The model presented above is not solvable analytically because of the spatial constraints, but it can be analyzed via numerical simulations, which are highly time-consuming even for relatively small system sizes. To reduce the sensitivity of the results and produce smoother, consistent curves for a single realization, randomness is somewhat reduced. When performing percolation over a series of 1 - p values, we proceed as follows: if E_p_1 is the set of links that have been randomly removed for 1 - p_1 then, for a larger value 1 - p_2, we remove the same set of links E_p_1 plus additional random links until we reach the value 1 - p_2.
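As an illustration of the model described in this section, the following Python sketch builds a ζ-model network and runs the overload cascade. It is a simplified stand-in, not the simulation code used here: it assumes unit link weights, uses networkx's unnormalized betweenness as a proxy for the number of optimal paths through a node, and omits the fixed-removal-set bookkeeping used to smooth single-realization curves.

import numpy as np
import networkx as nx

def build_zeta_network(L, zeta, k_avg, rng=np.random.default_rng(0)):
    """Nodes on an L x L lattice; link lengths ~ exp(-r/zeta), periodic boundaries."""
    G = nx.Graph()
    G.add_nodes_from((x, y) for x in range(L) for y in range(L))
    while 2 * G.number_of_edges() < k_avg * L * L:
        x, y = map(int, rng.integers(L, size=2))
        r, theta = rng.exponential(zeta), rng.uniform(0, 2 * np.pi)
        j = (int(round(x + r * np.cos(theta))) % L,   # closest lattice site to the
             int(round(y + r * np.sin(theta))) % L)   # ray end point (periodic)
        if j != (x, y) and not G.has_edge((x, y), j):
            G.add_edge((x, y), j)
    return G

def overload_cascade(G, p, alpha, rng=np.random.default_rng(1)):
    """Remove a fraction 1-p of links, then iterate node failures until stable."""
    load0 = nx.betweenness_centrality(G, normalized=False)   # proxy for initial loads
    capacity = {n: load0[n] * (1 + alpha) for n in G}
    G = G.copy()
    edges = list(G.edges())
    rng.shuffle(edges)
    G.remove_edges_from(edges[: int((1 - p) * len(edges))])
    while True:
        load = nx.betweenness_centrality(G, normalized=False)
        failed = [n for n in G if load[n] > capacity[n]]
        if not failed:
            break
        G.remove_nodes_from(failed)
    giant = max(nx.connected_components(G), key=len) if G.number_of_nodes() else set()
    return len(giant) / len(capacity)    # relative size S of the giant component

# Example (small system): S = overload_cascade(build_zeta_network(20, 3, 4), p=0.9, alpha=0.5)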
§ RESULTS At the end of the cascading process, we analyze the relative size of the giant component, S(p) ≡ S, for weak and strong spatial embedding, i.e., for long and short ζ, respectively; this is shown in Fig. <ref>. In both limits, we find that the system undergoes an abrupt transition at a critical value p_c, such that S(p ≥ p_c) > 0 and S(p < p_c) ≈ 0. Nevertheless, we can distinguish two different behaviors in the vicinity of these transitions. For weak spatial embedding (ζ = 100, Fig. <ref> (a)) the system approaches criticality from the right (i.e., for p > p_c and S > 0) with a clear curvature that appears to be absent for strong embedding (ζ = 3, Fig. <ref> (b)). We characterize the weakly embedded system, near and at criticality, through a generalization of the critical exponent β for abrupt transitions <cit.>, with respect to S(p_c) > 0. Thus, for p close to the percolation threshold p_c, S(p)-S(p_c) ∼ (p-p_c)^β. Indeed, in the inset of Fig. <ref> (a), we show that the exponent has a value of β≅ 0.5 for ζ = 100, which is in agreement with the usual mixed-order transition and with the value for interdependent random networks <cit.>. In contrast, for the case of strong spatial structure (ζ = 3, Fig. <ref> (b)), we do not observe a curvature with a critical exponent, but rather a linear decrease followed by an abrupt collapse, suggesting a pure first-order transition as in interdependent spatial networks (see, e.g., Fig. 1 in <cit.>). Note that both behaviors near criticality found here, for small and large ζ, are very similar to those found in pure percolation (no overloads) of interdependent networks <cit.> for short-range and long-range dependency links, respectively. This suggests that the overload process plays a role similar to that of dependencies. Depending on the individual realization observed, both the critical threshold p_c and the mass of the giant component at p_c, M_c = N S_c, may vary (see Fig. 1 of the Supplemental Material <cit.>). We focus next on mean-field networks with long-range connectivity links (ζ = L) and study the fluctuations of these two quantities at criticality, σ(p_c) = (⟨ p_c^2 ⟩ - ⟨ p_c ⟩^2)^1/2 and σ(M_c) = (⟨ M_c^2 ⟩ - ⟨ M_c ⟩^2)^1/2, for different system sizes. Gross et al. <cit.> found for interdependent networks that a finite-size scaling analysis yields the relations σ(p_c) ∼ L^-1/ν', ν' = 2/d, and σ(M_c) ∼ L^d'_f, d'_f = 3d/4, where d is the spatial dimension. In Fig. <ref>, we show that the ML overload model also exhibits a similar scaling of the dispersions with the linear size of the system L, and that the values of the exponents are the same as in mixed-order transitions of interdependent networks <cit.> (i.e., for d = 2, ν' = 1 and d'_f = 3/2). Continuing the comparison between spatial and non-spatial networks, the two types of transitions can be understood by observing, in the proximity of criticality, how the cascade propagation evolves while reaching the steady state of the system. In Fig. <ref> (a), we show the time evolution of S for ζ = 100 and for several values of p, with p ≤ p_c. The total time of the cascade, τ, increases as the system gets closer to criticality and diverges at p_c (for N →∞; see Fig. <ref> for a finite-size scaling). Thus, a useful method to identify p_c for each realization is to take the value of p for which the maximal number of iterations occurs in the numerical simulations. This is analogous to the behavior found in interdependent networks <cit.>. These abrupt transitions are also characterized, close to criticality, by a plateau in S, where a microscopic number of failures (Fig. <ref> (b)) keeps the cascade going with a branching factor of η≈ 1 (Fig. <ref> (c)), meaning that a small number of failures at a given time step produces a similarly small number of failures in the next step, for many steps, of the order of N^1/3 (see Fig. <ref> (b)). Due to finite-size effects, this phase does not last forever and, eventually, the number of failed nodes starts to increase because of the accumulated damage in the system, leading to an abrupt collapse <cit.>. In Fig. <ref> (d) we show, for this weakly embedded network with ζ = 100, the spatio-temporal distribution of the failures just above criticality. The failures are seen to spread over the whole network at all times. This occurs because optimal paths that disappear after some node failures are likely to be replaced by paths that pass through nodes in distant parts of the network, due to the long-range connections, thereby overloading these distant nodes. In marked contrast, the process for spatial networks (ζ = 3, Figs. <ref> (e)-(h)) is strikingly different. As the typical length of the links is short (relative to the system linear size L), initial failures due to overloads may concentrate and spread to close neighbors (Fig. <ref> (h)).
Eventually, the overload and the random failures create a hole of failed nodes within the functional giant component, which then grows spontaneously near criticality and spreads throughout the entire system, causing its collapse. This phenomenon is known as nucleation (analogous to the well-known water-freezing nucleation transitions), and it has also been observed in interdependent lattices with dependency links of finite length <cit.> and in spatial multiplex networks <cit.>. In addition, the complete disintegration of the giant component happens over a prolonged time interval with a relatively short plateau stage (in contrast to weakly embedded systems, as seen in Fig. <ref> (a)). All in all, our results regarding the temporal evolution of the cascades, as well as those corresponding to the critical exponents in the steady state, show a striking similarity with cascading failures in interdependent networks, suggesting that both overloaded and interdependent networks belong to the same universality class. § CONCLUSIONS In this paper, we study the critical behavior and the exponents characterizing the steady state and the dynamics of cascading failures due to overloads in both non-spatial and spatial networks. After initiating the overload cascade by randomly removing a fraction of links, we analyze how the spatial embedding strength, governed by the typical link length ζ, affects the behavior of the system at criticality. We find that the steady state of this process is characterized by abrupt transitions, regardless of the strength of the spatial embedding. However, for weakly embedded or non-embedded systems we observe a usual mixed-order transition similar to that of interdependent random networks, with a critical exponent value of β = 0.5. Furthermore, the exponent values that characterize the fluctuations of the quantities p_c and M(p_c) at criticality are also in agreement with those of interdependent networks. These exponents characterize the correlation length and the fractal fluctuations of the order parameter. In contrast, strongly embedded networks do not show a curvature (singularity) in the order parameter near p_c, but rather a linear decrease in the giant component size, as in interdependent spatial networks, which is characteristic of pure first-order transitions. Regarding the dynamical aspects near the transition, weakly and strongly embedded systems also show strikingly different behavior in the propagation of cascading failures. When studying the spatio-temporal propagation of failures, we find that for large ζ the failures spread through the whole network at all times. In contrast, for small ζ, failures are likely to initiate in a random location and propagate to nearby sites, yielding a nucleation spreading process that is also observed in spatial interdependent and multiplex networks <cit.> (see also the recent study by Choi et al. <cit.>). Since the phenomena and the critical exponents studied in this paper for overload failures are the same as those of interdependent networks, we suggest that both interdependent networks and overloads in networks belong to the same universality class. This is probably due to the similar types of interactions in the two systems. Our study represents an important contribution to the understanding of the mechanisms and the critical behavior of such catastrophic processes, especially in systems for which there are no analytical approaches, such as cascading failures in overloaded networks.
[Motter and Lai(2002)] A. E. Motter and Y.-C. Lai, Cascade-based attacks on complex networks, Phys. Rev. E 66, 065102(R) (2002).
[Motter(2004)] A. E. Motter, Cascade control and defense in complex networks, Phys. Rev. Lett. 93, 098701 (2004).
[Watts and Strogatz(1998)] D. J. Watts and S. H. Strogatz, Collective dynamics of 'small-world' networks, Nature 393, 440 (1998).
[Barthélemy(2011)] M. Barthélemy, Spatial networks, Physics Reports 499, 1 (2011).
[Gross et al.(2017)] B. Gross, D. Vaknin, M. M. Danziger, and S. Havlin, Multi-universality and localized attacks in spatially embedded networks, Proceedings of the Asia-Pacific Econophysics Conference 2016 - Big Data Analysis and Modeling toward Super Smart Society (APEC-SSS2016), 011002 (2017).
[Gross et al.(2022a)] B. Gross, I. Bonamassa, and S. Havlin, Fractal fluctuations at mixed-order transitions in interdependent networks, Phys. Rev. Lett. 129, 268301 (2022a).
[Waxman(1988)] B. M. Waxman, Routing of multipoint connections, IEEE J. Sel. Areas Commun. 6, 1617 (1988).
[Daqing et al.(2011)] L. Daqing, K. Kosmidis, A. Bunde, and S. Havlin, Dimension of spatially embedded networks, Nature Physics 7, 481 (2011).
[National Land Information Division, National Spatial Planning and Regional Policy Bureau, MILT of Japan(2012)] National railway data (2012), http://nlftp.mlit.go.jp/ksj/gml/datalist/KsjTmplt-N02.html.
[Danziger et al.(2016)] M. M. Danziger, L. M. Shekhtman, Y. Berezin, and S. Havlin, The effect of spatiality on multiplex networks, EPL (Europhysics Letters) 115, 36002 (2016).
[Vaknin et al.(2017)] D. Vaknin, M. M. Danziger, and S. Havlin, Spreading of localized attacks in spatial multiplex networks, New Journal of Physics 19, 073037 (2017).
[Perez et al.(2022)] I. A. Perez, D. V. B. Porath, C. E. La Rocca, S. V. Buldyrev, L. A. Braunstein, and S. Havlin, Cascading failures in isotropic and anisotropic spatial networks induced by localized attacks and overloads, New Journal of Physics 24, 043045 (2022).
[Gotesdyner et al.(2022)] O. Gotesdyner, B. Gross, D. V. B. Porath, and S. Havlin, Percolation on spatial anisotropic networks, Journal of Physics A: Mathematical and Theoretical 55, 254003 (2022).
[Havlin et al.(2005)] S. Havlin, L. A. Braunstein, S. V. Buldyrev, R. Cohen, T. Kalisky, S. Sreenivasan, and H. E. Stanley, Optimal path in random networks with disorder: A mini review, Physica A 346, 82 (2005).
[Erdös and Rényi(1959)] P. Erdös and A. Rényi, On random graphs I, Publicationes Mathematicae Debrecen 6, 290 (1959).
[Bunde and Havlin(1991)] A. Bunde and S. Havlin, Fractals and Disordered Systems (Springer-Verlag New York, Inc., 1991).
[Newman(2010)] M. E. J. Newman, Networks: An Introduction (Oxford University Press, 2010).
[Buldyrev et al.(2010)] S. Buldyrev, R. Parshani, G. Paul, H. Stanley, and S. Havlin, Catastrophic cascade of failures in interdependent networks, Nature 464, 1025 (2010).
[Gao et al.(2011)] J. Gao, S. Buldyrev, H. Stanley, and S. Havlin, Networks formed from interdependent networks, Nature Physics 8, 40 (2011).
[Li et al.(2012)] W. Li, A. Bashan, S. V. Buldyrev, H. E. Stanley, and S. Havlin, Cascading failures in interdependent lattice networks: The critical role of the length of dependency links, Phys. Rev. Lett. 108, 228702 (2012).
[Zhou et al.(2014)] D. Zhou, A. Bashan, R. Cohen, Y. Berezin, N. Shnerb, and S. Havlin, Simultaneous first- and second-order percolation transitions in interdependent networks, Phys. Rev. E 90, 012803 (2014).
[Danziger et al.(2014)] M. M. Danziger, A. Bashan, Y. Berezin, and S. Havlin, Percolation and cascade dynamics of spatial networks with partial dependency, Journal of Complex Networks 2, 460 (2014).
[Kiani et al.(2021)] N. A. Kiani, D. Gomez-Cabrero, and G. Bianconi, Networks of Networks in Biology: Concepts, Tools and Applications (Cambridge University Press, 2021).
[Berezin et al.(2015)] Y. Berezin, A. Bashan, M. Danziger, L. Daqing, and S. Havlin, Localized attacks on spatially embedded networks with dependencies, Scientific Reports 5, 8934 (2015).
[Gross and Havlin(2022)] B. Gross and S. Havlin, Percolation in Spatial Networks: Spatial Network Models Beyond Nearest Neighbours Structures, Elements in the Structure and Dynamics of Complex Networks (Cambridge University Press, 2022).
[sm()] See Supplemental Material at [URL] for a plot of independent realizations of the steady state of the cascades for different values of ζ.
[Boccaletti et al.(2016)] S. Boccaletti, J. Almendral, S. Guan, I. Leyva, Z. Liu, I. Sendiña-Nadal, Z. Wang, and Y. Zou, Explosive transitions in complex networks' structure and dynamics: Percolation and synchronization, Physics Reports 660, 1 (2016).
[Gross et al.(2022b)] B. Gross, I. Bonamassa, and S. Havlin, Fractal fluctuations at mixed-order transitions in interdependent networks, Phys. Rev. Lett. 129, 268301 (2022b).
[Zhao et al.(2016)] J. Zhao, D. Li, H. Sanhedrai, R. Cohen, and S. Havlin, Spatio-temporal propagation of cascading overload failures in spatially embedded networks, Nature Communications 7, 1 (2016).
[Bashan et al.(2012)] A. Bashan, Y. Berezin, S. Buldyrev, and S. Havlin, The extreme vulnerability of interdependent spatially embedded networks, Nature Physics 9, 667 (2012).
[Choi et al.(2023)] H. Choi, Y. S. Cho, R. D'Souza, J. Kertész, and B. Kahng, Unified framework for hybrid percolation transitions based on microscopic dynamics (2023), arXiv:2307.03584.
§ SUPPLEMENTAL MATERIAL
http://arxiv.org/abs/2307.05690v1
20230711180102
Ages, metallicities and structure of stellar clusters in the Magellanic Bridge
[ "Raphael A. P. Oliveira", "Francisco F. S. Maia", "Beatriz Barbuy", "Bruno Dias", "the VISCACHA collaboration" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR" ]
R. A. P. Oliveira et al. IAU Symposium 379: Assembly history of the Magellanic Bridge 17 2023 10.1017/xxxxx Proceedings of IAU Symposium 379 P. Bonifacio, M.-R. Cioni, F. Hammer, M. Pawlowski, and S. Taibi, eds. ^1 Universidade de São Paulo, IAG, Rua do Matão 1226, São Paulo 05508-090, Brazil ^2 Universidade Federal do Rio de Janeiro, Av. Athos da Silveira, 149, Rio de Janeiro 21941-909, Brazil ^3 Instituto de Alta Investigación, Sede Esmeralda, Universidad de Tarapacá, Av. Luis Emilio Recabarren 2477, Iquique, Chile The formation of the Magellanic Bridge during an encounter between the Magellanic Clouds ∼ 200 Myr ago would be imprinted in the chemical evolution and kinematics of its stellar population, with sites of active star formation. Since the Bridge contains hundreds of stellar clusters and associations, we combined deep photometry from the VISCACHA and SMASH surveys to explore this topic, deriving structural parameters, age, metallicity, distance and mass for 33 Bridge clusters with robust statistical tools. We identified a group of 13 clusters probably stripped from the Small Magellanic Cloud (0.5-6.8 Gyr, [Fe/H] < -0.6 dex) and another 15 probably formed in situ (< 200 Myr, [Fe/H] ∼ -0.4 dex). Two metallicity dips were detected in the age-metallicity relation, coeval with the Stream and Bridge formation epochs. Cluster masses range from 500 to ∼ 10^4 M_⊙, and a new estimate of 3-5× 10^5 M_⊙ is obtained for the Bridge stellar mass. Magellanic Clouds, galaxies: evolution, galaxies: star clusters: general Ages, metallicities and structure of stellar clusters in the Magellanic Bridge Raphael A. P. Oliveira^1, Francisco F. S. Maia^2, Beatriz Barbuy^1, Bruno Dias^3 & the VISCACHA collaboration ================================================================================================================= § INTRODUCTION The Magellanic System contains the Large and Small Magellanic Clouds (LMC and SMC), the Magellanic Bridge, the Stream and the Leading Arm. Besides containing the pair of satellites closest to the Milky Way, the Bridge is the nearest tidally-stripped structure. It is also the only one of the three gaseous structures that contains a significant stellar mass <cit.>, with hundreds of star clusters and associations <cit.>. The Bridge was first detected as an HI overdensity by <cit.>, but a blue, young stellar population was found decades later, concentrated closer to the SMC Wing and halfway along the Bridge, and strongly correlated with the distribution and kinematics of the HI gas <cit.>. An old population was also found later, spread more widely across the Bridge. Based on different epochs of HST proper motions, two N-body models attempt to reproduce the formation of the LMC-SMC pair and to determine whether they were independent satellites of the Milky Way until the LMC captured the SMC ∼ 1.2 Gyr ago <cit.>, or an old interacting system on its first perigalactic passage, entering the Galactic potential ∼ 2 Gyr ago <cit.>. Despite being highly dependent on the total masses of the Milky Way and the LMC, both models reproduce the Bridge formation during the most recent encounter between the LMC and SMC <cit.>, with gas and stars tidally stripped mostly from the SMC and possibly dragged from the LMC. This scenario would imply a gradient of increasing metallicity toward the LMC, due to a minor contribution of its more metal-rich gas, and the presence of an old stellar population amidst a predominantly young population formed in situ.
In this work <cit.> we analyse deep photometry of 33 Bridge clusters in terms of structural and fundamental parameters in order to investigate their assembly history, spatial distribution and the existence of such gradients. § PHOTOMETRIC DATA AND METHODOLOGY The VISCACHA survey <cit.> uses the adaptive optics system of the SOAR 4-m telescope (SAM) to observe clusters in the outskirts of the Magellanic Clouds and the Bridge. This very deep (V ∼ 24 mag) photometry, with good spatial resolution (∼ 0.6^''), allowed us to derive precise ages, metallicities, distances, masses and structural parameters for 33 Bridge clusters at RA < 3^h <cit.>. Observations with Goodman@SOAR are being carried out to cover the objects at RA > 3^h. A spectroscopic follow-up in the CaT region was also conducted to derive radial velocities and metallicities for clusters older than 1 Gyr (Dias et al., in preparation). Photometry from the SMASH survey <cit.> is also used as a complement, with a similar depth but lower spatial resolution in crowded regions. In order to ensure a homogeneous and self-consistent analysis, we applied Markov chain Monte Carlo (MCMC) sampling <cit.> both to the fitting of analytical functions to radial density profiles (RDPs) and to the statistical isochrone fitting of colour-magnitude diagrams (CMDs). The RDPs are obtained by computing the local stellar density around each star, using a variable aperture size. A likelihood function comparing the observed local density with the model <cit.> is evaluated and coupled to the MCMC to obtain a new centre, the core and tidal radii, and the central and background densities. Before the isochrone fitting, a membership analysis to exclude probable field stars is performed as described in <cit.>. The method compares the stars within a fraction of the tidal radius (cluster + field stars) with a nearby field and, based on their relative location in the CMD and their radius from the cluster centre, computes a median membership value. For the isochrone fitting, we employ the code <cit.> to fit PARSEC isochrones <cit.> to the decontaminated V vs. V-I CMDs. In this case, the likelihood compares the position of each star in the CMD to the closest point of the tentative isochrone, so that the MCMC retrieves posterior distributions in age, metallicity, distance and reddening. The cluster mass is obtained by integrating the flux of all member stars, converting it to an absolute magnitude and applying a calibration with age and metallicity provided in <cit.>.
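As a schematic illustration of the RDP fitting step described above (not the actual analysis pipeline), the sketch below samples a King (1962) profile plus a constant field background with the emcee package; the flat priors, the simplified unit-variance Gaussian likelihood, the initial guesses and the data format are assumptions introduced here.

import numpy as np
import emcee

def king_profile(r, rho0, rc, rt, bg):
    """King (1962) surface density plus a constant field-star background."""
    term = 1.0/np.sqrt(1.0 + (r/rc)**2) - 1.0/np.sqrt(1.0 + (rt/rc)**2)
    return np.where(r < rt, rho0 * term**2, 0.0) + bg

def log_prob(params, r, dens):
    rho0, rc, rt, bg = params
    if rho0 <= 0 or bg <= 0 or not (0 < rc < rt):
        return -np.inf                                   # flat priors with physical bounds
    model = king_profile(r, rho0, rc, rt, bg)
    return -0.5 * np.sum((dens - model)**2)              # simplified Gaussian likelihood

def fit_rdp(r, dens, nwalkers=32, nsteps=3000):
    """r: radii of the RDP bins (arcsec); dens: local stellar densities."""
    p0 = np.array([dens.max(), 10.0, 60.0, max(dens.min(), 1e-3)])   # initial guess
    walkers = p0 * (1 + 1e-3 * np.random.randn(nwalkers, 4))
    sampler = emcee.EnsembleSampler(nwalkers, 4, log_prob, args=(r, dens))
    sampler.run_mcmc(walkers, nsteps, progress=False)
    return sampler.get_chain(discard=nsteps // 2, flat=True)         # posterior samples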
§ RESULTS: CLUSTER PARAMETERS, GRADIENTS AND AGE-METALLICITY RELATION Figure <ref> gives the structural and fundamental parameters derived for the Wing/Bridge cluster HW77 with VISCACHA data. The observation was taken in optimal conditions for the use of adaptive optics, providing the deepest CMD and the best seeing (∼0.5^'') of the sample. A large core radius of 22.7^'' was derived, with a small concentration parameter c = log(r_t/r_c) = 0.54, whereas the age of 1.12 Gyr and [Fe/H] = -1.02 dex resulted in a mass of 2.2±0.6 × 10^3 M_⊙. The SMASH data for this cluster are nearly identical to VISCACHA, with the g vs. g-i CMD giving very similar results and uncertainties (Oliveira et al., in preparation). The sample of 33 Wing/Bridge clusters yielded concentration parameters between 0.46 and 1.25 (smaller than for Galactic open and globular clusters, but consistent with Magellanic Cloud clusters), ages from 4 Myr (HW81) to 6.8 Gyr (HW59), distances from 50 to 69 kpc, and masses from ∼ 500 M_⊙ (L101, ICA45) to 1.4× 10^4 M_⊙ (L113, L110). The masses of the 33 clusters add up to 10^5 M_⊙ which, extrapolated to the ∼ 100 clusters and ∼ 300 associations located in the Bridge, provides a new estimate of 3-5 × 10^5 M_⊙ for the Bridge stellar mass, more than one order of magnitude higher than that of <cit.>. Figure <ref> presents the projected distribution of all the objects from <cit.> together with the 33 sample clusters colour-coded by the derived age and metallicity. The older clusters are mostly isolated and located close to the SMC, whereas the young ones appear to be grouped along the Bridge. Two groups become evident: old clusters (> 500 Myr) more metal-poor than -0.6 dex, versus clusters younger than the Bridge, with [Fe/H] ∼ -0.4 dex, probably formed in situ. Figure <ref> reproduces a figure from <cit.> with the sample clusters older or younger than 300 Myr shown as red or blue symbols, in order to check whether our results follow the age and metallicity gradients of the SMC vicinity. As expected, the old clusters (probably stripped from the SMC) follow both gradients, with increasing age and decreasing metallicity out to a ∼ 4^∘ and an inversion after that. The young clusters do not show a pattern in age and have a nearly constant metallicity along the Bridge. The age-metallicity relation (AMR) is a valuable tool to analyse the chemical evolution of a galaxy, with hints of chemical enrichment or of decreases in metallicity. The present results follow the chemical evolutionary models in most cases, but some clusters imprint two metallicity dips in the AMR: a larger one around 1.5 Gyr ago, with the metallicity decreasing by 0.4 dex, and a smaller one around 200 Myr ago, with a 0.3 dex decrease in metallicity. These epochs coincide with the formation of the Magellanic Stream and Bridge, respectively. According to the models <cit.>, such a metallicity decrease is usually explained by an infall of more metal-poor gas, followed by a rapid chemical enrichment (and possibly an increase in the star formation rate). A complete interpretation of the results is given in <cit.>.
[Besla et al.(2012)] Besla, G., Kallivayalil, N., Hernquist, L., et al. 2012, MNRAS, 421, 2109
[Bica et al.(2020)] Bica, E., Westera, P., Kerber, L. O., et al. 2020, AJ, 159, 82
[Bressan et al.(2012)] Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127
[Choi et al.(2022)] Choi, Y., Olsen, K., Besla, G., et al. 2022, ApJ, 927, 153
[Diaz & Bekki(2011)] Diaz, J. & Bekki, K. 2011, MNRAS, 413, 2015
[Foreman-Mackey et al.(2013)] Foreman-Mackey, D., Hogg, D. W., Lang, D., et al. 2013, PASP, 125, 306
[Harris(2007)] Harris, J. 2007, ApJ, 658, 345
[Hindman et al.(1963)] Hindman, J. V., Kerr, F. J. & McGee, R. X. 1963, AuJPh, 16, 570
[King(1962)] King, I. 1962, AJ, 67, 471
[Maia et al.(2010)] Maia, F. F. S., Corradi, W. J. B. & Santos, J. F. C. 2010, MNRAS, 407, 1875
[Maia et al.(2014)] Maia, F. F. S., Piatti, A. E. & Santos, J. F. C. 2014, MNRAS, 437, 2005
[Maia et al.(2019)] Maia, F. F. S., Dias, B., Santos, J. F. C., et al. 2019, MNRAS, 484, 5702
[Nidever et al.(2017)] Nidever, D. L., Olsen, K., Walker, A. R., et al. 2017, AJ, 154, 199
[Oliveira et al.(2023)] Oliveira, R. A. P., Maia, F. F. S., Barbuy, B., et al. 2023, arXiv e-prints, arXiv:2306.05503
[Santos et al.(2020)] Santos, J. F. C., Maia, F. F. S., Dias, B., et al. 2020, MNRAS, 498, 205
[Skowron et al.(2014)] Skowron, D. M., Jacyszyn, A. M., Udalski, A., et al. 2014, ApJ, 795, 108
[Souza et al.(2020)] Souza, S. O., Kerber, L. O., Barbuy, B., et al. 2020, ApJ, 890, 38
[Tsujimoto & Bekki(2009)] Tsujimoto, T. & Bekki, K. 2009, ApJ, 700, L69
[Zivick et al.(2018)] Zivick, P., Kallivayalil, N., van der Marel, R. P., et al. 2018, ApJ, 864, 55
http://arxiv.org/abs/2307.04365v1
20230710064447
One-Shot Pruning for Fast-adapting Pre-trained Models on Devices
[ "Haiyan Zhao", "Guodong Long" ]
cs.CV
[ "cs.CV", "cs.LG" ]
University of Technology Sydney, Sydney, Australia [email protected] [email protected] One-Shot Pruning for Fast-adapting Pre-trained Models on Devices Haiyan Zhao Guodong Long August 12, 2023 ================================================================ Large-scale pre-trained models have been remarkably successful in resolving downstream tasks. Nonetheless, deploying these models on low-capability devices still requires an effective approach, such as model pruning. However, pruning the model from scratch can pose a practical challenge given the limited resources of each downstream task or device. To tackle this issue, we present a scalable one-shot pruning method that leverages pruned knowledge of similar tasks to extract a sub-network from the pre-trained model for a new task. Specifically, we create a score mask using the pruned models of similar tasks to identify task-specific filters/nodes in the pre-trained model for the new task. Based on this mask, we conduct a single round of pruning to extract a suitably sized sub-network that can quickly adapt to the new task with only a few training iterations. Our experimental analysis demonstrates the effectiveness of the proposed method on convolutional neural networks (CNNs) and vision transformers (ViTs) with various datasets. The proposed method consistently outperforms popular pruning baselines in terms of accuracy and efficiency when dealing with diverse downstream tasks under different memory constraints. § INTRODUCTION Large-scale pre-trained models have exhibited exceptional performance on a wide range of downstream tasks. For instance, CLIP <cit.> has surpassed the current state-of-the-art computer vision models on 27 downstream tasks, each with a diverse distribution. However, these pre-trained models typically consist of millions of parameters, hindering their deployment on edge devices with limited memory and computation budgets. Previous studies <cit.> have demonstrated that only a subset of the filters/nodes in a pre-trained model is crucial for the inference of a given downstream task. To address this, model pruning presents an effective approach wherein unnecessary filters/nodes can be removed without compromising accuracy. Conventional pruning methods in real-world applications often require repeated pruning of the pre-trained model to adapt to different downstream tasks and low-capability devices, resulting in a waste of computational power and time. Moreover, some devices may not have the capacity to prune large models from scratch due to memory and computation limitations. The question arises: is it feasible to find a sparse sub-network within a pre-trained model that can quickly adapt to a new downstream task? Recent studies <cit.> have provided evidence for the lottery ticket hypothesis (LTH), which states that training from a sparse sub-network of a randomly initialized model can achieve performance comparable to the original dense network. However, LTH cannot reduce the number of training iterations required. Furthermore, LTH focuses solely on unstructured weight pruning, which does not necessarily improve the efficiency of training and inference of the pruned model. Tian et al. <cit.> developed a meta-model that is trained on hundreds of tasks to create a well-initialized pruned model, which can rapidly adapt to a new task within a few training iterations, thereby reducing computational costs. The meta-model is the same for all tasks.
However, in practical scenarios, it is common for a pre-trained model to have to produce pruned models for downstream tasks or devices with varying memory constraints. Therefore, we propose to directly utilize prior knowledge from previously pruned models instead of training a new meta-model. For each downstream task, its pruned model retains only the critical, task-specific filters/nodes of the pre-trained model. We investigate the relationship between the pruned models of downstream tasks with different similarities and observe that tasks with high similarity share more task-specific filters/nodes in their pruned models. Based on this observation, this paper proposes a novel one-shot pruning method called "Scalable Mask Selection Pruning (SMSP)", which is illustrated in Fig. <ref>. By learning from the pruned results of similar tasks, SMSP creates a mask to identify task-specific filters/nodes in the pre-trained model and prunes the model once to extract a suitably sized sparse sub-network for a new task. SMSP is scalable because the created mask can be used to extract a sub-network of any pruning ratio from the pre-trained model to adapt to different devices. The sparse sub-network is then trained on the training data of the new task for a few iterations to quickly adapt to the new task. SMSP can thus significantly reduce the computation cost of pruning while maintaining the excellent performance of the pruned models. Extensive experiments have been conducted to evaluate the proposed method, demonstrating that SMSP outperforms state-of-the-art pruning methods on CNNs and ViTs over several datasets. Furthermore, SMSP performs well when used to produce pruned models for tasks with different memory constraints and for tasks from unseen datasets, which demonstrates its scalability and generality. § RELATED WORKS Model pruning is a highly effective technique for compressing deep neural networks. Some existing works <cit.> apply iterative pruning approaches to reduce the model size by eliminating filters/nodes with small weights while minimizing the loss of accuracy. Alternatively, methods like HRank <cit.> and APoZ <cit.> evaluate the importance of each filter based on its corresponding activation maps. Another line of methods <cit.> maintains a mask over the filters/nodes of the model to eliminate redundant parameters automatically. This dynamic pruning setting is also widely used in the pruning of vision transformers. Recent works <cit.> introduce learnable parameters for each attention head, node, layer, or block in the vision transformer to reduce the model's complexity. The approach of Goyal et al. <cit.> differs from traditional parameter pruning in that it dynamically prunes input patches in each block of the ViT, resulting in significant reductions in inference computation without compromising the model's performance. Meanwhile, Tang et al. <cit.> evaluate the importance of each patch in maintaining the original final results. However, these pruning methods require starting the pruning process from scratch, which is time-consuming. In contrast, our method leverages the pruned knowledge of similar tasks to reduce the number of pruning iterations significantly. Some other pruning methods aim to speed up the pruning process. Cai et al. <cit.> propose a once-for-all network that supports diverse settings by decoupling training and neural architecture search, which reduces the cost and makes it scalable for efficient inference across many devices and resource constraints.
However, the generated pruned models are all for one task and cannot be generalized to other tasks. Tian et al.<cit.> proposed a meta method that trains a well-initialized pruned meta-model to quickly adapt to different few-shot tasks. However, this meta-model is the same for all tasks and cannot generalize to devices with varying memory constraints. MEST<cit.>, which is designed for edge devices, starts training from a sparse sub-network to save training computation. DLTH <cit.> is a variant of LTH and also starts from a well-designed sub-network. It claims that randomly extracted subnetworks from a randomly initialized dense network can be transformed into a well-performing sub-network that can achieve admirable performance compared to LTH. However, all these methods require a significant amount of time and computation to find the initialized sub-networks. In contrast, our proposed method can be applied to different downstream tasks, and it does not require any additional computation cost to extract a sub-network for each new task. § METHODOLOGY In this section, we establish a model pool consisting of the pruned models obtained from hundreds of tasks on both CNN and ViT. These pruned models are extracted to retain the task-specific knowledge present in the pre-trained model for each task. We observe that similar tasks tend to share more task-specific filters/nodes. Leveraging this observation, we propose a generic and scalable approach to reduce the computational cost of pruning for new tasks or devices. §.§ Pool of Pruned Models from Different Tasks A pruned model of the downstream task typically preserves filters/nodes that are indispensable for its inference in the pre-trained model. In practice, a dataset of pruned models exists owing to the extensive utilization of large-scale models across various downstream tasks and devices. In this paper, to emulate this situation, we construct a simplified dataset of pruned models for different tasks and devices using the same pre-trained models. Automatic Mask Pruning (AMP). Inspired by <cit.>, we propose automatic mask pruning (AMP) to automatically identify task-specific filters/nodes for different tasks in the pre-trained model. Algorithm <ref> provides a detailed outline of the AMP process. Specifically, given a pre-trained network F(·;Θ) with parameter Θ and a training set D_t of a new target task t, let Θ^t={θ^t_i}_i=1:n where θ^t_i denotes every filter/head/node-i in the network. By adding a mask, we incorporate a learnable mask score S^t_i to each prunable filter/head/node-i in the pre-trained model. We define an operator ⊙ applied to Θ^t and its associated scores S^t as (Θ^t⊙ S^t)[i]≜Θ^t[i] · S^t[i] During the pruning process, these differentiable scores are optimized along with model parameters. To encourage sparsity, an additional L1 regularization loss is applied and filters/nodes with scores below a predefined threshold will be pruned. The final objective function of AMP is defined as follows: min_{S^t_i}_i=1:n𝔼_(x,y)∼ D_tl(y, F(x;Θ^t⊙ S^t))+ λS^t_1 where y represents the ground truth for x, l denotes the cross-entropy loss, and λ is the weight used to balance between the two losses. We apply AMP to prune two major categories of pre-trained models, i.e., CNN and ViT, for diverse tasks with different memory constraints. Specifically, we select ResNet-18(ResNet-50)<cit.> pre-trained on CIFAR-100<cit.>(ImageNet <cit.>) for CNN, and apply AMP to multiply the mask score to each filter in the network. 
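The AMP objective above can be made concrete with a short sketch. The snippet below is a minimal, illustrative PyTorch implementation of the mask-score idea (a learnable per-filter score multiplied onto each convolution's output channels, an L1 sparsity penalty, and threshold-based pruning); the module names, the regularization weight, and the threshold value are our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Module):
    """Convolution whose output channels are scaled by learnable mask scores S."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():      # pre-trained weights stay fixed, as in the modified AMP
            p.requires_grad = False
        self.score = nn.Parameter(torch.ones(conv.out_channels))  # one score per filter

    def forward(self, x):
        out = self.conv(x)
        return out * self.score.view(1, -1, 1, 1)   # Theta ⊙ S, applied channel-wise

def amp_loss(logits, target, scores, lam=1e-3):
    """Cross-entropy plus L1 sparsity penalty over all mask scores."""
    ce = F.cross_entropy(logits, target)
    l1 = sum(s.abs().sum() for s in scores)
    return ce + lam * l1

def prune_by_threshold(masked_layers, threshold=0.05):
    """Zero out filters whose learned score falls below a predefined threshold."""
    with torch.no_grad():
        for layer in masked_layers:
            keep = (layer.score.abs() >= threshold).float()
            layer.score.mul_(keep)
```

In practice one would wrap every prunable convolution of the pre-trained backbone, optimize only the scores with amp_loss on the task's training data, and then call prune_by_threshold to obtain the mask S^t stored in the pool.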
For ViT, we use DeiT-S <cit.> pre-trained on ImageNet. As reported in previous work <cit.>, only some attention heads in deep pre-trained transformers are necessary for downstream tasks. Therefore, AMP is used to prune ViT at two levels: heads in the multi-head attention modules and nodes in the feed-forward network modules. In the pool of pruned models, tasks for ResNet-18, ResNet-50, and ViT are randomly sampled from classes in CIFAR-100 and ImageNet datasets, respectively. To verify the generality and scalability of our proposed method, we collect the pruned models of diverse tasks, which can be divided into 3 groups: 3-classes, 5-classes and 10-classes classification tasks, each containing 300 tasks. To emulate the memory limitations of various devices, we store pruned models with varying pruning ratios for each task in our model pool. Due to the high memory costs of storing each pruned model, we have modified the AMP algorithm such that only mask scores are optimized with regularization, while all pre-trained model parameters remain fixed. This modification facilitates accurate masking for each task to identify task-specific knowledge in the pre-trained model. As all tasks can share the same pre-trained model during inference, we only record the class labels C^t and the mask S^t for each task t. The mask scores of pruned filters/nodes are then set to 0. §.§ Knowledge Shared between Tasks In the realm of multi-task/lifelong learning methods, similar tasks usually share more parameters in the network. In this section, we study the overlap of pruned models for similar tasks to verify whether more similar downstream tasks share more parameters in the pre-trained model. To compute the similarity between downstream tasks, we apply the Log Expected Empirical Prediction (LEEP) <cit.>, which is used to evaluate the transferability of representations learned by the source task to the target task. This method only requires running the target task's data through the pruned model once to compute the LEEP score. Overlap of task-specific filters/nodes. Upon applying AMP to a new task, filters or nodes that have small mask scores will be pruned, whereas those with high mask scores, which contain task-specific knowledge relevant to the downstream task, can be retained in the model. So we focus on the overlap of these high-score filters/nodes between tasks. Given the pruned model of a task m, the set of filters/nodes Ω^m retained in the pre-trained model are sorted according to their mask scores {S^m_i}_i∈Ω^m in the descending order. Ω^m_k denotes the filters/nodes with top-k mask score values in the mask of task m. For each pair of tasks, say task m and task n (using the same pre-trained model), we compute the overlap ratio R of filters/nodes with top-k score values in their masks, i.e., R = |Ω^m_k ∩Ω^n_k|/k. In Fig. <ref>, we present the overlap ratio of retained filters/nodes in various pre-trained models for tasks with varying degrees of similarity. The x-axis of Fig. <ref> represents the top-k filters/heads/nodes with the highest mask scores in the pruned model, while the y-axis represents the overlap ratio of top-k filters in the pruned models of two similar tasks. Given a new task, we calculate its LEEP similarities to the existing tasks in the model pool. Then we sort these LEEP similarities and partition them into three groups of equal intervals. Existing tasks whose similarity scores fall into a specific interval will be assigned to the corresponding similarity group. 
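The overlap measure itself is straightforward to compute from two stored score masks. The short function below is an illustrative implementation of R = |Ω^m_k ∩ Ω^n_k| / k (ours, not the authors' code; ties in the sort are broken arbitrarily).

```python
import numpy as np

def topk_overlap_ratio(scores_m: np.ndarray, scores_n: np.ndarray, k: int) -> float:
    """Overlap ratio R of the top-k highest-scoring filters/nodes of two pruned models.

    scores_m, scores_n: 1-D arrays of mask scores (pruned entries already set to 0).
    """
    top_m = set(np.argsort(scores_m)[-k:])   # indices of the k largest scores for task m
    top_n = set(np.argsort(scores_n)[-k:])   # indices of the k largest scores for task n
    return len(top_m & top_n) / k
```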
From similarity group 1 to group 3 in Fig. <ref>, the similarities between tasks decrease. We observed from all three plots in Fig. <ref> that the overlap ratios of tasks belonging to similarity group 1 are considerably greater than those of tasks in similarity group 3. This indicates that the pruned models of more similar tasks share a significantly higher number of task-specific filters/heads/nodes. Hence, the pruned models of previous similar tasks can be utilized to identify task-specific parameters in the pre-trained model, expediting the pruning of the new task. On the other hand, as the value of k increases, the overlap ratios in three plots grow gradually. This can be attributed to the fact that certain filters/heads/nodes with high mask scores in one task may be retained by another task with smaller scores. These filters/nodes have varying importance for different tasks and may serve distinct roles. In plot (c), we observe that the overlap ratios begin to converge when k exceeds 6. This is due to the fact that only a small number of heads (approximately 8) are preserved in the pruned model of each task. §.§ Scalable Mask Selection Pruning (SMSP) Inspired by the above discovery, we propose a generic and simple method called “Scalable Mask Selection Pruning (SMSP)" to fast-adapt the pre-trained model to downstream tasks. The process of generating a mask for each new task is illustrated in Figure <ref>. SMSP leverages the knowledge of pruned models for similar tasks to create a pruning mask of the pre-trained model for a new task. The detailed process of SMSP is shown in Alg. <ref>. Specifically, given a new task t, SMSP first calculates its LEEP similarities <cit.> to tasks in the pool and samples M similar neighbor tasks M^t. The mask scores S^t of task t are computed by summing the mask scores of all selected similar tasks, as shown below: {S^t_i}_i=1:n = ∑_m=1^MS^m_i Here, n represents the total number of filters/heads/nodes in the model, and M represents the total number of selected similar tasks. As filters/nodes with high scores in S^t have been shown to play essential roles in similar tasks, it is likely that they contain task-specific knowledge relevant to the new target task t. We sort the mask score of task t in descending order. Given any pruning ratio r, SMSP prunes r*n filters with the smallest mask scores once to meet the memory constraint. The training objective of SMSP is: min 𝔼_(x,y)∼ D_tl(y, F(x;θ^t_i: i∈Ω)) where θ^t_i: i∈Ω represents filters/nodes retained after pruning. In the retained sub-network, the mask is removed, and all the parameters are inherited from the original pre-trained model. SMSP trains the sub-network on the new target task's data for only a few iterations to speed up pruning. § EXPERIMENTS In this section, we evaluate SMSP by pruning ResNet and ViT for downstream tasks from several datasets and compare its results with SOTA pruning methods. We validate the scalability and generality of SMSP by generating pruned models for tasks with different memory constraints. Finally, we study the effect of the mask, the number of similar tasks and task similarities on SMSP. §.§ Experimental Settings For each experiment scenario, we randomly sample 50 test tasks from the dataset. Each test task selects its similar tasks from the pool of pruned models according to their LEEP similarities. To make our study more solid, classes in selected similar tasks are disjoint from those in the test task so that their training data are totally different. 
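A compact sketch of the SMSP selection step defined above (summing the mask scores of the M most similar tasks and removing the r*n lowest-scoring filters/nodes in a single shot) is given below. Function and variable names, and the tie-breaking behaviour of the sort, are our own assumptions rather than the paper's implementation.

```python
import numpy as np

def smsp_select(pool_scores: list, leep: np.ndarray,
                num_neighbors: int, prune_ratio: float) -> np.ndarray:
    """Build the SMSP keep-mask for a new task.

    pool_scores:   mask-score vectors S^m of all pool tasks (pruned entries are 0).
    leep:          LEEP similarity of the new task to each pool task.
    num_neighbors: M, the number of similar tasks whose scores are summed.
    prune_ratio:   r, the fraction of filters/nodes to remove.
    Returns a boolean vector; True marks filters/nodes kept in the sub-network.
    """
    neighbors = np.argsort(leep)[-num_neighbors:]          # M most similar tasks
    summed = sum(pool_scores[m] for m in neighbors)        # S^t = sum over selected tasks of S^m
    n = summed.shape[0]
    num_prune = int(round(prune_ratio * n))
    order = np.argsort(summed)                             # ascending: smallest scores first
    keep = np.ones(n, dtype=bool)
    keep[order[:num_prune]] = False                        # one-shot removal of r*n entries
    return keep
```

The retained filters/nodes inherit their original pre-trained weights, and the resulting sub-network is then trained on the new task's data for only a few iterations, as described above.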
In our experiments, we conduct a grid search on a small subset of test tasks to tune the hyperparameters, which are then applied to all tasks. When applying SMSP to prune ResNet, we utilize SGD to train the sub-network and apply cosine annealing learning rate. The batch size is set to 128, and the initial learning rate is set to 0.01. For experiments of ViT, we follow previous works<cit.> and use the optimizer of AdamW with the cosine-annealing learning rate. During training, we use a batch size of 256 and a smaller initial learning rate of 0.0002. All results shown in this section are averaged over 50 test tasks. §.§ Comparison with SOTA Methods We compare our method with several SOTA pruning methods. To demonstrate our method's effectiveness, we compare it with AMP, a conventional pruning method that prunes the pre-trained model from scratch using a large number of pruning iterations. For tasks on ResNet, we also include two popular pruning methods as baselines: Feature Pruning <cit.> and Taylor Pruning <cit.>. Feature Pruning calculates the importance of filters by averaging the activation values over all training samples, while Taylor Pruning measures the impact of removing each filter on the final loss to determine their importance. We also compare our method with some popular methods that accelerate pruning. For example, IHT-based Reptile <cit.> learns a well-initialized pruned meta-model on a set of training tasks. For each new task, it can obtain the final pruned model by training the meta-model for a few iterations. DLTH <cit.> is a variant of LTH, which extracts a winning ticket for each task. MEST <cit.> can accelerate pruning by training from a sparse sub-network. For pruning ViT, we compare SMSP with PoWER <cit.>, which proposes to dynamically prune the input patches of each block in ViT, and UVC <cit.>, which not only prunes heads and nodes but also unimportant layers and blocks in the model. The results of comparing SMSP with the baseline methods for ResNet and ViT are presented in Tab. <ref> and Tab. <ref>, respectively. All results are obtained by pruning 5-classes classification tasks with a pruning ratio of 90%. The findings indicate that, for both ResNet and ViT, SMSP performs slightly better than AMP, which requires significantly more pruning iterations. Although Feature Pruning and Taylor Pruning also yield similar or slightly better results than SMSP for ResNet-18 and ResNet-50, they demand significantly more computational resources than SMSP. Moreover, SMSP surpasses IHT-based Reptile by a large margin, despite the fact that both approaches leverage knowledge from multiple tasks. Unlike IHT-based Reptile, which employs the same pruned meta-model for each new task, SMSP extracts different sub-networks for different tasks, composed of task-specific parameters, which can enhance performance. Furthermore, the performance of SMSP outperforms DLTH and MEST, which, like SMSP, start with a well-designed sub-network. However, neither DLTH nor MEST has task-specific knowledge in their initialized pruned model, while SMSP initializes the sub-network by leveraging knowledge from similar tasks. The outcomes presented in Tab. <ref> demonstrate that SMSP significantly outperforms baseline methods for ViT. Owing to a relatively low number of training iterations, neither UVC nor PoWER can recover the accuracy when a considerable number of parameters or patches are eliminated. 
Conversely, SMSP leverages a sub-network created by similar tasks as an initialization, hence, only a few training iterations are necessary to construct a well-performing pruned model. §.§ Evaluation of Scalability and Generality Our proposed SMSP is scalable in two folds. 1) SMSP can produce a promising pruned model for a new task of any memory constraint with a few training iterations. 2) All pruned models of tasks with varying data distribution and sizes can be selected as similar tasks to accelerate the pruning of the new task. Applying SMSP to tasks of different sizes. In Tab. <ref>, we show the results of applying SMSP to tasks of different sizes. The pruning ratios of all tasks are set to 90%. In the table, we find that for test tasks of different sizes, when we use the 5-classes similar tasks to extract the sub-networks for the test tasks, its performance is better than that of the 3-classes similar tasks. This is because similar tasks containing more classes can better differentiate data from different classes. Similar tasks of large sizes can extract more accurate task-specific filters/nodes for a given new task. Applying SMSP to tasks of different memory constraints. In Tab. <ref>, we apply SMSP to tasks of varying memory constraints. All the tasks are 5-classes classification tasks. We observe that SMSP outperforms AMP when transferring between different pruning ratios. Additionally, SMSP performs better when the pruning ratios of similar tasks and test tasks are the same. This could be attributed to the fact that in a pruned model with a small pruning ratio, some redundant filters/nodes are preserved in the mask, whereas in a pruned model with a large pruning ratio, some task-specific filters/nodes will be removed. An interesting finding is that SMSP can leverage similar tasks with large pruning ratios to generate a well-performing pruned model of a smaller pruning ratio for a new task. This demonstrates the superiority of using pruned results of similar tasks as prior knowledge. Performance on unseen tasks. To validate the generality of SMSP, we randomly sample 50 test tasks from Caltech-256 <cit.>. SMSP produces pruned models for these test tasks by learning from pruned results of tasks from ViT trained on ImageNet. The pre-trained ViT and similar tasks in the pool of pruned results never see the data of Caltech-256. All the test tasks are 5-classes classification tasks with the pruning ratio of 90%. In Tab. <ref>, we show the results of applying SMSP to Caltech-256 and compare it with AMP. The results show that SMSP can achieve comparable performance as AMP, which uses 10x training iterations. This indicates that SMSP can also identify task-specific heads/nodes in the pre-trained ViT for each unseen task from Caltech-256, so only a few training iterations suffice to produce a well-performed pruned model, showing the generality of SMSP to diverse datasets. §.§ Ablation Study Effect of the mask. The main contribution of SMSP is its ability to leverage the pruned results of similar tasks to generate the task-specific mask for each new test task. To validate the efficacy of the masks produced by SMSP, we randomly generate a mask for each task using the same pruning ratio and compare their performance with that of SMSP. In Tab. <ref>, we observe that for tasks using ResNet-18 and ViT, the performance of random masks is significantly worse than that of SMSP. 
These results suggest that the masks generated by SMSP can effectively identify filters/nodes that are relevant to the new target tasks. Effect of the number of similar tasks. In plot (a) of Fig. <ref>, we study the effect of the number of similar tasks for each new task. For tasks on both ResNet-18 and ViT, as the number of similar tasks increases, the performance of SMSP also improves. This is because more pruned results of similar tasks can provide more task-specific knowledge for the new task. When the number of similar tasks exceeds 8, SMSP converges, which indicates that 8 similar tasks per task are enough for SMSP to create a high-quality mask. Effect of task similarities. In plot (b) of Fig. <ref>, we compare the performance of SMSP when tasks with different similarities are used. The accuracy obtained using pruned models with higher similarities is always better than that with lower similarities, which implies that tasks with high similarities share more knowledge with new target tasks. This observation aligns with the findings presented in Section <ref>. The plot also illustrates that SMSP converges once the number of training iterations exceeds 80, indicating that only a limited number of training iterations is enough for SMSP to build a promising pruned model. § CONCLUSION In this paper, we propose a generic one-shot pruning method called SMSP to fast-adapt the pre-trained model to downstream tasks. Based on the discovery that tasks with high similarities share more filters/nodes in their pruned models, given a new task, SMSP leverages the knowledge from the pruned models of its similar tasks to extract a sub-network from the pre-trained model. Then, a few training steps on the sub-network suffice to reach a high-quality pruned model. Our experiments demonstrate that SMSP achieves SOTA results in terms of both accuracy and efficiency across various datasets and pre-trained models.
http://arxiv.org/abs/2307.04537v1
20230710130246
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception
[ "Chi-Chih Chang", "Wei-Cheng Lin", "Pei-Shuo Wang", "Sheng-Feng Yu", "Yu-Chen Lu", "Kuan-Cheng Lin", "Kai-Chiang Wu" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception Chi-Chih Chang^1, Wei-Cheng Lin^1, Pei-Shuo Wang^1, Sheng-Feng Yu^1,2, Yu-Chen Lu^1,2, Kuan-Cheng Lin^1 and Kai-Chiang Wu^1 ^1 National Yang Ming Chiao Tung University ^2 Macronix International Co., Ltd. August 12, 2023 ============================== In this work, we present an efficient and quantization-aware panoptic driving perception model (Q-YOLOP) for object detection, drivable area segmentation, and lane line segmentation, in the context of autonomous driving. Our model employs the Efficient Layer Aggregation Network (ELAN) as its backbone and task-specific heads for each task. We employ a four-stage training process that includes pretraining on the BDD100K dataset, finetuning on both the BDD100K and iVS datasets, and quantization-aware training (QAT) on BDD100K. During the training process, we use powerful data augmentation techniques, such as random perspective and mosaic, and train the model on a combination of the BDD100K and iVS datasets. Both strategies enhance the model's generalization capabilities. The proposed model achieves state-of-the-art performance with an mAP@0.5 of 0.622 for object detection and an mIoU of 0.612 for segmentation, while maintaining low computational and memory requirements. Object detection, semantic segmentation, quantization-aware training, autonomous driving § INTRODUCTION Panoptic perception systems are critical components of autonomous cars, enabling them to perceive and understand their environment comprehensively. These systems solve multiple vision tasks simultaneously, including object detection, lane line segmentation, and drivable area segmentation, and generate a rich understanding of the road scene. In order to solve the multi-task problem for panoptic driving perception, we develop a low-power, multi-task model tailored for traffic scenarios, addressing the challenges of object detection and semantic segmentation. The aim is to create efficient algorithms capable of accurately recognizing objects and segmenting both lane lines and drivable areas while maintaining minimal computational cost, rendering them ideal for deployment in resource-constrained environments such as mobile devices, IoT devices, and embedded systems. To achieve low power consumption, we adopt a neural network architecture optimized for energy efficiency. The development process involves reducing the size and complexity of the models used for object detection and segmentation, as well as quantizing the model to minimize energy consumption. Our panoptic driving perception system reaches 93.46 FPS on an NVIDIA V100 and 3.68 FPS on the MediaTek Dimensity 9200 Series Platform. Meanwhile, it attains 0.622 mAP and 0.612 mIoU on the object detection and segmentation tasks of the competition iVS dataset. § METHOD Our model, derived from YOLOPv2 <cit.> and YOLOv7 <cit.>, is specifically designed to address both object detection and segmentation tasks. It comprises five main components: the backbone, the neck, the detection head, the drivable area segmentation head, and the lane line segmentation head. The backbone is an Efficient Layer Aggregation Network (ELAN) <cit.>, optimized for rapid and efficient feature extraction.
The neck of our model is a Spatial Pyramid Pooling (SPP) network <cit.>, which facilitates the handling of objects with varying scales and sizes by pooling features at multiple resolutions. This enhancement improves the accuracy and robustness of object detection. The detection head is based on RepConv <cit.>, an innovative neural network architecture that merges the efficiency of mobile networks with the accuracy of more complex models. Subsequently, a non-maximum suppression is applied to the output of the object detection process to generate the final predictions. Consequently, our model is capable of accurately detecting objects in images while managing computation and memory requirements. Furthermore, in addition to object detection, our neural network also encompasses task-specific heads for drivable area segmentation and lane line segmentation. These dedicated heads possess distinct network structures that are optimized for their respective tasks. As drivable area segmentation and lane line segmentation generate separate predictions, we allow the result of lane line segmentation to overlap with the result of drivable area segmentation. In summary, our model is engineered to optimize efficiency and accuracy while also addressing the challenges associated with multi-task learning. Its unique combination of components and specialized task heads makes it ideal for real-world applications such as autonomous driving and object recognition in resource-constrained environments. A visual representation of our model architecture is presented in Figure <ref>. §.§ Loss Function As we modify the head of YOLOPv2 <cit.> to support multi-label prediction, we introduce the loss function derived from HybridNets <cit.> to enhance the performance of our approach. The loss function for the object detection task consists of three components, L_det = α_1 L_class + α_2 L_obj + α_3 L_box Specifically, for L_det, focal loss is used in both L_class and L_obj. The classification loss, L_class, is responsible for penalizing classification errors, while L_obj is used for predicting object confidence. Both terms are implemented by focal loss <cit.>. The term L_box represents the similarity between the predicted results and ground truth by considering the overlap rate, aspect ratio, and scale. We implement L_box using the smooth L1 loss function. The coefficients α_1, α_2, and α_3 are hyperparameters used to balance the detection losses. The objective for the lane line segmentation task combines three components, L_seg_ll = β_1 L_Tversky + β_2 L_Focal + β_3 L_Jaccard The first term, the Tversky loss <cit.> L_Tversky, is used to address the issue of data imbalance and achieve a much better trade-off between precision and recall, and the second term L_Focal aims to minimize the classification error between pixels and focuses on hard labels. The third term, L_Jaccard, is utilized to measure the similarity between prediction and ground-truth segmentation masks. The coefficients β_1, β_2, and β_3 are hyperparameters used to balance the losses. On the other hand, the objective for the drivable area segmentation task combines only two components: L_seg_da = γ_1 L_Tversky + γ_2 L_Focal The coefficients γ_1 and γ_2 are hyperparameters used to balance the losses.
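To illustrate how these segmentation terms fit together, the PyTorch sketch below implements Tversky, binary focal, and soft Jaccard losses for a single-channel mask and combines them as in L_seg_ll; the weighted combination into the overall objective follows immediately after this sketch. This is a simplified reconstruction: the alpha, beta, and gamma values inside the individual losses are our own choices, while the default weights of 1.0 mirror the β coefficients reported in the implementation details.

```python
import torch
import torch.nn.functional as F

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Tversky loss for a binary mask; alpha/beta trade off false negatives vs. false positives."""
    p = torch.sigmoid(pred).flatten(1)
    t = target.flatten(1)
    tp = (p * t).sum(dim=1)
    fn = ((1 - p) * t).sum(dim=1)
    fp = (p * (1 - t)).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky).mean()

def focal_loss(pred, target, gamma=2.0, alpha=0.25):
    """Binary focal loss that down-weights easy pixels and focuses on hard ones."""
    bce = F.binary_cross_entropy_with_logits(pred, target, reduction="none")
    p_t = torch.exp(-bce)                      # probability assigned to the true class
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

def jaccard_loss(pred, target, eps=1e-6):
    """Soft IoU (Jaccard) loss measuring overlap between prediction and ground truth."""
    p = torch.sigmoid(pred).flatten(1)
    t = target.flatten(1)
    inter = (p * t).sum(dim=1)
    union = p.sum(dim=1) + t.sum(dim=1) - inter
    return (1.0 - (inter + eps) / (union + eps)).mean()

def lane_line_seg_loss(pred, target, b1=1.0, b2=1.0, b3=1.0):
    """L_seg_ll = b1*Tversky + b2*Focal + b3*Jaccard, mirroring the equation above."""
    return (b1 * tversky_loss(pred, target)
            + b2 * focal_loss(pred, target)
            + b3 * jaccard_loss(pred, target))
```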
The overall objective, L_all, for our final model combines the object detection loss L_det and the segmentation loss L_seg to learn both tasks at the same time: L_all = δ_1 L_det + δ_2 L_seg_da + δ_3 L_seg_ll The coefficient δ_1, δ_2 and δ_3 are hyperparameters used to balance the detection loss and segmentation losses. §.§ Quantization Quantization-Aware Training (QAT) is a technique aimed at making neural networks more amenable to quantization. During QAT, we introduce the quantization error during training by sequentially applying quantize and dequantize operations. This enables the network to learn more robust representations that can be efficiently quantized during inference. We employ the Straight-Through Estimator (STE) <cit.> algorithm for QAT, which offers a simple and efficient approach. With STE, we round the weights and activations to the nearest quantization level during forward propagation, while utilizing the gradients of the unquantized values during backward propagation. In this manner, the network can backpropagate the gradients through the quantization operation, which is not differentiable in its original form. By simulating the quantization error during training, we can ensure that the network learns robust features that are less sensitive to quantization. § IMPLEMENTATION DETAIL §.§ Data Preparation As the organizers of the contest provided only a portion of the BDD100K <cit.> dataset, we opted to use the complete BDD100K dataset to augment the training data. In previous works that used the BDD100K dataset for semantic segmentation, the focus was typically on segmenting only the drivable areas and lane lines. There were no attempts to further classify the drivable areas or lane lines into multiple categories. However, our semantic segmentation task involves categorizing images into six classes: background, main lane, alternative lane, single line, double line, and dashed line. This is different from previous works, which only segmented images into two classes: line and lane. Therefore, we re-generate the six classes of segmentation labels for the BDD100K dataset. For the object detection task, the objective is to detect four types of objects: pedestrian, vehicle, scooter, and bicycle. In the case of scooters and bicycles, both the rider and the respective vehicle are included within the bounding box. However, the BDD100K dataset labels riders, scooters, and bicycles as distinct entities, as depicted in the following figure. To comply with the task requirements, we employ the Hungarian algorithm <cit.> to pair riders with their corresponding scooters or bicycles and label them within the same bounding box. §.§ Training Process In our experiments, the training process consists of several stages: 1) initial pretraining on the BDD100K <cit.> dataset, then 2) pretraining on the BDD100K with mosaic augmentation <cit.>, 3) finetuning on both BDD100K and iVS datasets, 4) quantization-aware training (QAT) on the integrated iVS and BDD100K datasets. Initially, we train our model on the BDD100K dataset without mosaic for 300 epochs, then turning on mosaic augmentation for 150 epochs. Subsequently, we jointly train the model on both the BDD100K and iVS datasets for an additional 150 epochs. Finally, we apply QAT <cit.> for an extra 20 epochs for quantization. Data Augmentation Techniques. To enhance the model's generalization capabilities, we apply several data augmentation techniques during the training process. 
These techniques include normalization, random perspective transformation, HSV color space augmentation, horizontal flipping, and mosaic. By simulating variations that may occur in real-world scenarios, these techniques improve the model's ability to adapt to new data. Mosaic augmentation is enabled in the second and third stages, and it is turned off for the last 10 epochs of the third stage. In detail, all images are normalized with mean (0.485, 0.456, 0.406) and std (0.229, 0.224, 0.225); random perspective transformation uses a scale factor of 0.25 and a translation factor of 0.1. For HSV color space augmentation, the Hue, Saturation, and Value augmentation factors are 0.015, 0.7, and 0.4, respectively. Weight Initialization. The weights of the backbone and detection head of our model are initialized from the YOLOv7 <cit.> pretrained weights, while all other parameters are randomly initialized. Implementation Details. We resize all images from both the BDD100K <cit.> and iVS datasets to 384 × 640. The Adam optimizer is used for optimization. Different batch sizes are used for different stages: 32 during the first and second pretraining stages, 32 during finetuning, and 16 during quantization-aware training (QAT). The default anchor sizes are set as (12,16), (19,36), (40,28), (36,75), (76,55), (72,146), (142,110), (192,243), and (459,401). The learning rate scheduler employed is cosine annealing with a warm-up phase, and the initial learning rates are set to 1e-2 during the first pretraining, 5e-3 during the second pretraining, 5e-4 during finetuning, and 5e-5 during QAT. The minimum learning rates are set to 1e-5 during the first pretraining, 5e-6 during the second pretraining, 5e-7 during finetuning, and 5e-8 during QAT. The warm-up phase is set to 5 epochs during pretraining and 0 epochs during finetuning and QAT. The values of the coefficients for the losses are reported as follows: α_1 = 0.5, α_2 = 1.0, α_3 = 0.05, β_1 = 1.0, β_2 = 1.0, β_3 = 1.0, δ_1 = 1.0, δ_2 = 1.0, γ_1 = 0.2, γ_2 = 0.2, and γ_3 = 0.2. These coefficients are used in the computation of the loss function, which is a crucial component of our proposed method. §.§ Inference Process The inference process involves pre-processing the input images, which includes resizing from 1080 × 1920 to 384 × 640. Following this, images are normalized with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). Post-processing is then carried out for the detection and segmentation outputs. In the detection part, the intersection over union (IoU) threshold of non-maximum suppression (NMS) is set to 0.25, and the confidence threshold is set to 0.05. In the segmentation part, the results from the two segmentation heads are merged, and the output is upsampled from 384 × 640 to 1080 × 1920. § EXPERIMENTAL RESULTS §.§ Environment Setup We conducted our experiments using 8 Nvidia V100 GPUs for training. PyTorch 1.10 <cit.> and TensorFlow 2.8.0 <cit.> were used to implement our models and training pipeline, while OpenCV 4.6.0 <cit.> was used for image pre-processing. Our model architecture was based on the publicly available PyTorch implementations of YOLOP <cit.> and YOLOv7 <cit.>. To migrate the model from PyTorch to TensorFlow, we first translated the PyTorch model into ONNX[https://onnx.ai/] format, and then used the onnx2tflite[https://github.com/MPolaris/onnx2tflite] toolkit to convert ONNX into a TensorFlow (.h5) and a TFLite (.tflite) model.
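Before turning to the results, the QAT procedure described in the Quantization subsection can be made concrete with a minimal straight-through-estimator sketch: values are rounded to the nearest quantization level in the forward pass, while gradients flow through unchanged in the backward pass. This is a generic illustration (symmetric per-tensor 8-bit fake quantization with a dynamically computed scale), not the exact scheme used in Q-YOLOP.

```python
import torch
import torch.nn as nn

class FakeQuantSTE(torch.autograd.Function):
    """Round to the nearest quantization level; pass gradients straight through."""
    @staticmethod
    def forward(ctx, x, scale, num_bits=8):
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
        q = torch.clamp(torch.round(x / scale), qmin, qmax)
        return q * scale                       # de-quantize back to float

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None, None         # STE: identity gradient w.r.t. x

class QuantWrapper(nn.Module):
    """Applies fake quantization to a layer's output during QAT."""
    def __init__(self, layer: nn.Module, num_bits: int = 8):
        super().__init__()
        self.layer = layer
        self.num_bits = num_bits

    def forward(self, x):
        out = self.layer(x)
        scale = out.detach().abs().max() / (2 ** (self.num_bits - 1) - 1) + 1e-8
        return FakeQuantSTE.apply(out, scale, self.num_bits)
```

Wrapping the model's layers this way exposes the network to quantization error during training, so the learned representations remain robust once the model is exported to an 8-bit format.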
§.§ Main Results We present the performance of our model on the final testing dataset provided by the contest organizer at different training stages. Initially, we trained the model only on the BDD100K <cit.> dataset. However, due to the variation in the data distribution between BDD100K and the target task, the model may not be able to generalize well on the target task. To address this issue, we added the iVS dataset to the training process and performed mix data finetuning (i.e. the third stage). This approach enabled the model to adapt itself to better fit the target task, as the iVS dataset provided additional data with a similar data distribution to the target task. By training on this diverse dataset, the model was able to learn more effectively from the data and improve its performance on the target task. The performance of our proposed model is evaluated through various training stages. In the pretraining without mosaic stage, as depicted in Table <ref>, the model is trained on BDD100K dataset, which effectively boosts the performance of all. Based on YOLOv4 <cit.>, we integrate mosaic technology in our model training. However, in the pretraining stage with mosaic shown in Table <ref>, we notice a decrease in performance across all tasks. The implementation of the mosaic technique does not yield improved performance, which could potentially be attributed to its training exclusively on the BDD100K dataset. As a result, the model may be more suited to the BDD100K dataset, leading to a slight decline in performance when applied to the iVS dataset. Nevertheless, further finetuning on the iVS dataset enables the model to achieve enhanced performance. In the third stage, the model is finetuned using a mix of the BDD100K and iVS datasets with mosaic augmentation, which resulted in a significant improvement in object detection and lane line segmentation performance. Additionally, in the last 10 epochs, the mosaic augmentation was turned off to allow the model to recover its adaptability to normal images. §.§ Testing Results in the Competition Table <ref> shows the testing results of public dataset in the competition provided by the contest organizer. Our approach is effective for both object detection and segmentation tasks, achieving 0.495 mAP and 0.401 mIoU on pretraining with mosaic stage. Finetuning the model on the mix dataset improved the performance to 0.540 mAP and 0.615 mIoU, demonstrating the importance of the mix dataset in overcoming domain shift. Applying QAT to the finetuned model not only maintained the model's performance but also improved the detection task, which achieved 0.622 mAP and 0.612 mIoU. The testing results of private dataset in the competition provided by the contest organizer is shown in Table <ref>. Our approach achieves state-of-the-art performance in both object detection and segmentation tasks, with 0.421 mAP and 0.612 mIoU. Moreover, Table <ref> shows that our quantization strategy effectively reduced the model size by 4 times and improved inference speed by 3 times. These results demonstrate the effectiveness of our quantization strategy not only in improving model performance but also in reducing computational cost and memory footprint, which is important for real-world deployment of deep learning models. §.§ Quantization Strategy The performance of the quantized network using different quantization paradigms is presented in Table <ref>. 
We first observe that Post-Training Quantization led to a significant performance drop in the segmentation tasks, with only 0.285 and 0.248 mIoU achieved for drivable area and lane line segmentation, respectively. However, this performance drop can be mitigated by adopting a Quantization-Aware Training (QAT) strategy. Specifically, the quantized network achieved a 0.569 mAP for object detection, 0.852 mIoU for drivable area segmentation, and 0.402 mIoU for lane line segmentation. These findings demonstrate the effectiveness of the QAT strategy in boosting the performance of the quantized network, as compared to the Post-Training Quantization strategy. § CONCLUSION In this work, we have successfully implemented a lightweight object detection and segmentation model. To improve its efficiency, we explored the effectiveness of two techniques: quantization-aware training and mix data finetuning (i.e. the third stage). Through extensive experimentation, we have demonstrated the effectiveness of these techniques in improving the accuracy and efficiency of our model. Our final model has achieved competitive results on the target dataset, demonstrating its potential for real-world applications.
http://arxiv.org/abs/2307.06118v1
20230712121936
TreeFormer: a Semi-Supervised Transformer-based Framework for Tree Counting from a Single High Resolution Image
[ "Hamed Amini Amirkolaee", "Miaojing Shi", "Mark Mulligan" ]
cs.CV
[ "cs.CV", "cs.AI" ]
TreeFormer: a Semi-Supervised Transformer-based Framework for Tree Counting from a Single High Resolution Image Hamed Amini Amirkolaee,  Miaojing Shi^*, Senior Member, IEEE, Mark Mulligan ^*Corresponding author Hamed Amini Amirkolaee is with the Department of Informatics, King's College London, London WC2B 4BG, U.K. E-mail: [email protected]. Miaojing Shi is with the College of Electronic and Information Engineering, Tongji University, Shanghai, 20092, China. E-mail: [email protected]. Mark Mulligan is with the Department of Geography, King’s College London, London WC2B 4BG, U.K. E-mail: [email protected]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING ====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Automatic tree density estimation and counting using single aerial and satellite images is a challenging task in photogrammetry and remote sensing, yet has an important role in forest management. In this paper, we propose the first semi-supervised transformer-based framework for tree counting which reduces the expensive tree annotations for remote sensing images. Our method, termed as TreeFormer, first develops a pyramid tree representation module based on transformer blocks to extract multi-scale features during the encoding stage. Contextual attention-based feature fusion and tree density regressor modules are further designed to utilize the robust features from the encoder to estimate tree density maps in the decoder. Moreover, we propose a pyramid learning strategy that includes local tree density consistency and local tree count ranking losses to utilize unlabeled images into the training process. Finally, the tree counter token is introduced to regulate the network by computing the global tree counts for both labeled and unlabeled images. Our model was evaluated on two benchmark tree counting datasets, Jiangsu, and Yosemite, as well as a new dataset, KCL-London, created by ourselves. Our TreeFormer outperforms the state of the art semi-supervised methods under the same setting and exceeds the fully-supervised methods using the same number of labeled images. The codes and datasets are available at https://github.com/HAAClassic/TreeFormer. Tree counting, semi-supervised model, transformer, pyramid learning strategy, remote sensing. § INTRODUCTION Trees are the pulse of the earth and are vital organisms in maintaining the ecological functioning and health of the planet <cit.>. Tree counting using high-resolution images is useful in various fields such as forest inventory <cit.>, urban planning <cit.>, farm management <cit.>, and crop estimation <cit.>, making it important in photogrammetry, remote sensing, and nature-based solutions to environmental change <cit.>. Counting trees using traditional methods such as field surveys based on quadrats is very time-consuming and expensive <cit.>. Therefore providing an automatic method in this field can be very helpful and practical <cit.>. 
High-resolution aerial and satellite images <cit.> and light detection and ranging (LiDAR) <cit.> data are the most important sources for tree detection and counting. 3D LiDAR data along with 2D aerial and satellite images can be very effective to achieve accurate results <cit.>. On the other hand, collecting and preparing aerial and satellite images is much less expensive than LiDAR data which makes it worth presenting an automatic method for tree counting using a single high-resolution image <cit.>. In the last decade, artificial intelligence and especially deep learning have developed greatly and achieved significant success in the field of remote sensing <cit.>. The lack of 3D information in aerial and satellite images makes it difficult to identify and distinguish trees, while the ability of deep neural networks (DNNs) in extracting and distinguish of the geometric and textural features of trees has made this feasible <cit.>. Although the supervised learning methods based on DNNs have achieved promising performance in tree counting <cit.>, a large number of trees must be labeled (in the form of points or bounding boxes) to train these networks, which is very costly and time-consuming, especially for areas where trees are very dense. To solve this problem, a semi-supervised strategy is desirable, in which a limited number of labeled images and a large number of unlabeled images are utilized. Apart from training the model on the labeled data, the main purpose of semi-supervised learning is to design efficient supervision for unlabeled data to include them into the model training <cit.>. The state of the art solutions can be mainly categorized into two classes: pseudo-labeling and consistency regularization. In the first class, the model is trained using the labeled data and is used to generate pseudo labels for unlabeled data. The pseudo labels are then included into the model training for unlabeled data <cit.>. In the second class, the model is trained on both labeled and unlabeled data using a supervised loss and a consistency loss, respectively. The supervised loss is task-related while the consistency loss is normally applied as a regulator to force the agreement between results obtained from differently-augmented unlabeled images <cit.>. In semi-supervised object counting <cit.>, a ranking constraint is often employed to investigate the count relations between the super- and sub-regions of an image. In this paper, we for the first time propose a semi-supervised framework for tree counting, namely TreeFormer. It is built upon a transformer structure. In recent years, the transformer has attracted a lot of attention in our community and has had very promising results in many visual tasks <cit.>. This is due to their strong capacity to aggregate local information using self-attention and propagate representations from lower to higher layers in the network. We base our network encoder on a pyramid vision transformer (PVT) <cit.> to extract robust multi-scale features. A contextual attention-based feature fusion module is introduced to utilize these features in the network decoder. We develop the decoder to produce pyramid predictions by adding a tree density regressor module after each scale feature. In addition, we notice the CLASS token in the PVT gathers global information from all patches for image classification <cit.>. Inspired by it, we design a new tree counter token to estimate the global tree count at each scale of our network encoder. 
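The counter-token idea can be illustrated with a short sketch: a learnable token is appended to the patch embeddings, attends to them through a transformer layer, and is then read out as a scalar tree count. The code below is a schematic reconstruction using a generic PyTorch transformer encoder layer, not the PVT-based block used in TreeFormer; all names and dimensions are our own assumptions.

```python
import torch
import torch.nn as nn

class CounterTokenBlock(nn.Module):
    """Appends a learnable tree counter token to the patch tokens and regresses a count from it."""
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.counter_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                                  batch_first=True)
        self.count_head = nn.Linear(dim, 1)    # scalar tree count read from the token

    def forward(self, patch_tokens):           # patch_tokens: (B, N, dim)
        b = patch_tokens.size(0)
        token = self.counter_token.expand(b, -1, -1)
        x = torch.cat([patch_tokens, token], dim=1)      # [f_1, ..., f_rho, f_T]
        x = self.encoder(x)                              # the token aggregates global density information
        count = self.count_head(x[:, -1])                # readout from the counter token
        return x[:, :-1], count.squeeze(-1)
```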
Our network optimization follows a pyramid learning strategy consisting of pixel-level, region-level, and image-level learning. For labeled data, the estimated tree density maps are compared with the ground truth using a pixel-level distribution matching loss. To effectively leverage the unlabeled data, we introduce two region-level losses: a local tree density consistency loss and a local tree count ranking loss. The local tree density consistency loss is proposed to encourage the tree density predictions from the same local region over different scales to be consistent for a given input. In order to encourage the invariance of the model's predictions, different scales are perturbed with noise. The local tree count ranking loss is proposed to constrain the tree numbers in different local regions of the tree density map so that a super-region contains an equal or greater number of trees than its sub-region in an image. Finally, the network is optimized on the image-level multi-scale tree counts predicted by the tree counter tokens. For a labeled image, these predictions are directly compared with the ground truth tree count. For an unlabeled image, we average these predictions to serve as a global pseudo supervision, which encourages the multi-scale outputs to be close for the same image. In summary, the main contribution of this work, TreeFormer, is threefold: * For network architecture, a pyramid tree feature representation module is employed for the encoder and a contextual attention-based feature fusion module is designed to utilize the pyramid features for the decoder. A tree density regressor module and tree counter token are introduced to predict the tree density map and global tree count at each scale, respectively. * For network optimization, a pyramid learning strategy is designed. Specifically, a scheme of learning from unlabeled images using local tree density consistency and local tree count ranking losses at the region level is emphasized; an image-level global tree count regularization based on the global predictions from tree counter tokens is also highlighted. * For network benchmarking, we create a new tree counting dataset, KCL-London, from London, UK. This dataset contains 921 high-resolution images that are gathered by manually digitizing Google Earth imagery. Individual tree locations in the images are manually annotated. We conduct extensive experiments on three datasets, Jiangsu <cit.>, Yosemite <cit.>, and KCL-London. Our method outperforms the state of the art significantly. § RELATED WORKS We survey the related works in two subsections: object counting and tree counting. §.§ Object counting Object counting methods have been used in various fields such as human crowds, cars <cit.>, cells <cit.>, and trees <cit.>. The challenges of object counting include scale variation, severe occlusions, appearance variations, illumination conditions, and perspective distortions <cit.>. Many methods proposed in the field of object counting are related to crowd counting <cit.>. Below we discuss these methods in two parts: fully supervised and partially supervised methods. §.§.§ Fully supervised methods These methods usually convert the point-level annotations of object centers into density maps using Gaussian kernels and utilize them as ground truth. They achieve good performance via training with a large amount of annotated data. In order to address the challenge of scale variation in crowd counting, multi-column/multi-scale networks are popular architectural choices <cit.>.
The visual attention mechanism is also effective in addressing the problem of scale variation and background noise in crowded scenes <cit.>. In addition, employing auxiliary tasks such as localization <cit.>, classification <cit.>, and segmentation <cit.> is useful to improve the counting performance. §.§.§ Partially supervised methods Recently, researchers have tried to reduce the need for labeled training data by developing weakly/semi-supervised methods. Semi-supervised methods alleviate the annotation burden by using additional unlabeled data, which can help achieve high accuracy with only a small number of labeled images. For instance, Liu <cit.> introduced a pairwise ranking loss to estimate a density map using a large number of unlabeled images. Wang <cit.> reduced the need for annotation by combining real and synthetic images. Another strategy is to estimate pseudo labels of unlabeled images and use them in a supervised network to improve the accuracy of the results <cit.>. Recently, Zhao <cit.> proposed an active labeling strategy to annotate the most informative images in the dataset and learn the counting model upon both labeled and unlabeled images <cit.>. Sam <cit.> presented a stacked convolution autoencoder based on the grid winner-take-all paradigm in which most of the parameters can be learned with unlabeled data. Weakly-supervised methods aim to use global counts instead of point-level annotations for model learning <cit.>. For example, Yang <cit.> presented a weakly-supervised counting network, which directly regresses the crowd numbers without location supervision. They utilized a soft-label sorting network along with a counting network to sort images according to their crowd numbers. §.§ Tree counting Counting trees in dense tree canopy, where trees are very close and sometimes interlocking, is much more difficult than counting other objects such as humans, cars, or cells. In other words, trees can appear in continuous form from the top view, and their separation using a single image is very complex. Traditionally, the area where trees exist is detected first, and then algorithms such as region growing <cit.>, watershed segmentation <cit.>, and template matching <cit.> are used to segment and count trees. In these methods, suitable features are selected and produced by analyzing the spectral, textural, and geometrical characteristics of trees. The accuracy of these methods is dependent on the strength of the handcrafted features that were manually engineered by researchers. In regions with dense and complex tree cover, their accuracies are not satisfactory. Recently, the successful performance of deep neural networks (DNNs) in object detection <cit.> has inspired researchers to adapt these algorithms for the detection and counting of trees. In these networks, suitable features are automatically learned by the network. The widely used DNNs for tree counting are either based on detection <cit.> or density estimation <cit.>. §.§.§ Detection-based methods These methods count the number of trees in each image by identifying and localizing individual trees with bounding boxes. Machefer <cit.> utilized a Mask R-CNN for tree counting from unmanned aerial vehicle (UAV) images. They focused on low-density crops (potatoes and lettuce) and employed a transfer learning technique to reduce the requirement for training data. Zheng <cit.> presented a domain adaptive network to detect and count oil palm trees.
They employed a multi-level attention mechanism, including entropy-level attention and feature-level attention, to enhance transferability across different domains. Weinstein <cit.> produced an open-source dataset for tree crown estimation at different sites across the United States. They show that deep learning models can leverage existing LiDAR-based unsupervised delineation to generate training data for a tree detection network <cit.>. Ammar <cit.> compared the performance of different networks such as Faster R-CNN, YOLOv3, YOLOv4, and EfficientNet for the automated counting and geolocation of palm trees from aerial images. Lassalle <cit.> combined a DNN with watershed segmentation to delineate individual tree crowns. §.§.§ Density estimation based methods The performance of the detection-based methods is unsatisfactory when encountering occlusion and background clutter in extremely dense tree regions. The density estimation-based methods learn the mapping from an image to its tree count, which avoids the dependence on a detector and often achieves higher performance. A density map is normally produced by convolving a Gaussian function with a specified neighborhood size and sigma at every annotated tree location in an image. The integral of the density map is equal to the number of trees in the image. Chen and Shang <cit.> combined a convolutional neural network (CNN) and transformer blocks to estimate the density map. Osco <cit.> employed a DNN to estimate the number of citrus trees by predicting a density map from UAV multispectral imagery. They also analyzed the effect of using the near-infrared band on the achieved results. Yao <cit.> constructed a tree counting dataset using four GF-II images and utilized a two-column DNN based on VGGnet and Alexnet for tree density estimation. Liu <cit.> proposed a pyramid encoding-decoding network, which integrates the features from the multiple decoding paths and adapts to the characteristics of trees at different scales. In general, there is not much research on tree density estimation, and the existing works mainly use common and basic deep learning networks <cit.>. Also, the existing algorithms in this field are supervised methods, while it is vital to provide a semi-supervised method with an efficient structure due to the lack of annotated training data in this field. § DATA SOURCE §.§ Area The study area is London, the United Kingdom. Collating data about London's urban forest is challenging due to the number of landowners and managers involved. This city contains trees with different types, sizes, shapes, and densities, which are challenging to detect and count using traditional remote sensing approaches. Some trees are isolated on streets and others are grouped together in small recreational areas or large areas of ancient forested parkland. Backgrounds are sometimes pavement and sometimes grassland, water, or other trees. London also has many different tree species, such as Apple, Ash, Cherry, Hawthorn, Hornbeam, Lime, Maple, Oak, and Pear, which have different canopy shapes and characteristics. In addition to the above varieties of trees, there are also trees with different arrangements in different areas of the city. For example, in central areas of the city, trees have a low density and are located at a greater distance from each other, while the density of trees is very high at the edge of the city. §.§ Labels The required high-resolution images are gathered and stitched together from Google Maps at 0.2 m ground sampling distance (GSD).
The gathered images are divided into images with 1024 × 1024 pixels. To aid the identification of tree locations and numbers of selected images, we employed the accessible tree locations of London in London Datastore website[https://data.london.gov.uk/dataset/local-authority-maintained-trees]. Although these data show the locations and species information for over 880,000 of London's trees, the data mainly contains information on trees in the main streets and does not cover trees that are dense between houses or parks. We manually annotated the latter. To this end, Global Mapper as geographic information system software is used to annotate the center of each tree. The tree labels are rasterized and converted to JPG format with a resolution compatible with the image data. §.§ Characteristics The prepared dataset, termed as KCL-London, consists of 613 labeled and 308 unlabeled images. 95,067 trees were annotated in total in the labeled images. The tree number in these images varies from about 4 in areas with sparse covers to 332 in areas with dense covers. These images are gathered from different locations that represent a range of different areas across London. In Fig. <ref> the selected locations of prepared images with annotations are presented. § METHODOLOGY §.§ Overview In this paper, a semi-supervised framework is proposed to estimate the density map of trees from a remote sensing image. An overview of the designed framework is presented in Fig. <ref>. Our network has an encoder-decoder architecture based on transformer blocks. A pyramid tree feature representation (PTFR) module is developed in the encoder to extract multi-phase features from the input image (Sec. <ref>). A contextual attention-based feature fusion (CAFF) module is introduced to utilize the pyramid features in the decoder (Sec. <ref>). Afterwards, the tree density map is estimated in each scale of the decoder using the designed tree density regressor (TDR) module (Sec. <ref>). Besides, a tree counter token (TCT) is proposed to compute the number of trees in each phase of the encoder (Sec. <ref>). For the labeled data, a supervised distribution matching loss is employed to train the network (Sec. <ref>). The same architecture with shared parameters is used for unlabeled data, while the proposed local tree density consistency and local tree count ranking losses are utilized to assist the network to achieve more accurate results (Sec. <ref>). A global tree count regularization that optimizes the global tree count predictions from the tree counter tokens is applied to both labeled and unlabeled data (Sec. <ref>). The loss functions used for labeled and unlabeled data are applied to the pyramid estimations of the proposed model. §.§ TreeFormer framework In this section, we introduce the pyramid tree feature representations and the tree counter tokens for the encoder of our TreeFormer; the contextual attention-based feature fusion modules, and the tree density regressor modules for the decoder of our module. They are also illustrated in Fig. <ref> in details. §.§.§ Pyramid Tree Feature Representation We develop the PTFR based on the pyramid vision transformer (PVT) <cit.> to effectively extract multi-phase features in the encoding process. The PVT divides the image into 4×4 non-overlapping patches as input. The PTFR is obtained by applying convolutional layers with different strides in each phase of the PVT. 
Suppose W and H represent the width and height of the input image; then a set of feature maps (phase 1: W/4×H/4×128, phase 2: W/8×H/8×256, phase 3: W/16×H/16×512, phase 4: W/32×H/32×1024) is generated in the PTFR. The feature map obtained in each phase is both fed to the CAFF module, specified next, and used as the (half-sized) input of the next phase (Fig. <ref>a). Note that, in the following, we halve the resolution of the feature map while doubling the number of channels at each scale. In the i-th phase, as illustrated in Fig. <ref>b, the input image is divided into W/2^i+1×H/2^i+1 patches which are fed to a linear projection layer and a normalization layer for patch embedding. The obtained patch feature maps are flattened into vectors and added to the position embedding before they are passed through a transformer encoder. The output is reshaped into one feature map. The transformer encoder is composed of a spatial-reduction attention layer, which reduces the spatial scale of the keys and values before the multi-head attention operation, and a feed-forward layer <cit.>. §.§.§ Contextual Attention-based Feature Fusion We design CAFF to utilize the robust multi-scale features collaboratively in the decoder in a pyramid pattern: as illustrated in Fig. <ref>c, a coarser-resolution feature map from the previous scale of the decoder and a finer-resolution feature map from the earlier phase of the encoder are fed to a CAFF module, while the output of this CAFF module and the next finer-resolution feature map from the encoder are fed to the next CAFF module, until the final feature maps are produced (W/4×H/4). In short, the generated features are incrementally refined in the decoder, which leads to stronger and more effective tree density estimation. In each CAFF module, as illustrated in Fig. <ref>c, a bilinear interpolation layer is first used to upsample the coarser-resolution feature map from the previous scale of the decoder (S_i+1). A series of convolutional, batch normalization, and ReLU layers is applied to extract tree-relevant information from both inputs (S_i+1, S_i). A channel attention (CA) block is devised on the finer-resolution branch (S_i), which consists of an average pooling and two fully-connected (FC) layers with a ReLU between them; a sigmoid function is added at the end. Inspired by <cit.>, the CA block computes a channel-wise importance vector which multiplies the feature map, so that the tree-relevant channels in the feature map are highlighted. The re-weighted feature map is finally added to the upsampled coarser-resolution feature map (S_i+1) to generate a robust feature map for tree density estimation. §.§.§ Tree Density Regressor The purpose of the TDR module is to estimate the tree density map. The TDR module is used at three different scales of the decoder to generate tree density maps (D_1, D_2, and D_3 in Fig. <ref>a). The scale factor for upsampling the feature maps in the TDR (see Fig. <ref>d) is set to 1, 2, and 4, respectively, so that feature maps of the same size are generated across the decoder scales. Afterward, a block of convolutional, batch normalization, and ReLU layers is applied to reduce the number of feature channels and obtain the final density map at each scale. We let every block be responsible for reducing half of the channels (e.g., 128 channels are reduced to 1). The original number of feature channels in the first, second, and third decoding scales is 128, 256, and 512, respectively.
Hence, we set the number of blocks (τ in Fig. <ref>d) in the first, second, and third scales to 1, 2, and 3, respectively. The TDR module is also responsible for perturbing the multi-scale feature maps so that the local tree density consistency loss, specified later, can be applied to enforce consistency over multiple density predictions. It applies a perturbation layer before the upsampling layer in the TDR. Given a feature map F, we specifically choose three types of perturbations, feature perturbation, feature masking, and spatial dropout from <cit.>, corresponding to D_1, D_2 and D_3 in Fig. <ref>a. * Feature perturbation: a noise tensor ξ∼ U(-0.3, 0.3) of the same size as F is uniformly sampled. The noise is injected into F after adjusting the noise amplitude by element-wise multiplication of the noise with F, F̃ = (F⊙ξ) + F. * Feature masking: The sum of F over channels is computed and normalized as F'. A mask (M_drop) is generated by determining a threshold (ε∼ U(0.7, 0.9)) and applying it to F^', M_drop=F' ≤ε. The masked feature map is computed by multiplying F with M_drop, F̃ = F ⊙ M_drop. In this way, between 10% and 30% of the most active regions in the feature map are masked. * Spatial dropout: The dropout is applied across the channels of F. In other words, some channels are set to zero (dropped out) and the others are kept <cit.>. A minimal code sketch of these three perturbations is given below. §.§.§ Tree Counter Token The purpose of the TCT module is to compute the number of trees from Phase 2 to 4 of the encoder (Fig. <ref>a and b). In the i-th phase, the result of the patch embedding is reshaped to a stack of vectors, 𝐟 = [f_1, f_2, ..., f_ρ], ρ=2^2(i+1), where each f is a 1 × C_i dimensional feature vector corresponding to a local region. We introduce an additional tree counter token (f_T) appended to 𝐟, 𝐟 = [f_1, f_2, ..., f_ρ, f_T]. These vectors are added to the positional embedding and passed through the spatial-reduction and multi-head attention blocks in the transformer encoder. Through the encoding process, f_T aggregates the tree density information from the remaining feature vectors in 𝐟 before it is fed to the TCT module to calculate the total number of trees. In the TCT module, as illustrated in Fig. <ref>e, the tree count is estimated after applying the aforementioned perturbation layer and a convolutional layer. Since here the input of the perturbation layer is a vector instead of a matrix, the feature masking is performed similarly to the spatial dropout. The difference is that the spatial dropout randomly sets some channels to zero, while the feature masking selects some of the most active channels to be zero according to ε. §.§ Pyramid Learning strategy We design a pyramid learning strategy that consists of three levels, pixel-level, region-level, and image-level learning, to train the TreeFormer. Analyzing the results obtained at different levels of detail can increase the accuracy in a coarse-to-fine manner. At the pixel level, the distribution matching loss is used as a supervised loss to evaluate the results for labeled data. At the region level, two losses, local tree count ranking and local tree density consistency, are proposed for unlabeled data. At the image level, the total number of trees is estimated by the TCT for learning on both labeled and unlabeled data. To clarify, the pyramid learning is not multi-stage learning but end-to-end learning. Pyramid means that the loss functions are defined on different levels of the input while all loss functions are optimized simultaneously.
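Before detailing the three learning levels, we give a minimal PyTorch-style sketch of the three feature perturbations used in the TDR module described above. This is an illustrative sketch rather than the authors' implementation: the tensor layout (batch, channels, height, width), the per-sample normalization in the masking step, and the dropout rate are our own assumptions; only the sampling ranges U(-0.3, 0.3) and U(0.7, 0.9) are taken from the text.

```python
import torch
import torch.nn.functional as F


def feature_perturbation(feat):
    # Uniform noise xi ~ U(-0.3, 0.3) of the same size as the feature map,
    # injected multiplicatively: feat_tilde = feat * xi + feat.
    noise = torch.empty_like(feat).uniform_(-0.3, 0.3)
    return feat * noise + feat


def feature_masking(feat):
    # Sum over channels, normalise per sample to [0, 1], then mask the most
    # active regions above a random threshold eps ~ U(0.7, 0.9).
    attn = feat.sum(dim=1, keepdim=True)
    attn_min = attn.amin(dim=(2, 3), keepdim=True)
    attn_max = attn.amax(dim=(2, 3), keepdim=True)
    attn = (attn - attn_min) / (attn_max - attn_min + 1e-8)
    eps = torch.empty(1, device=feat.device).uniform_(0.7, 0.9)
    mask = (attn <= eps).to(feat.dtype)
    return feat * mask


def spatial_dropout(feat, p=0.3):
    # Channel-wise dropout: entire channels are zeroed at random.
    return F.dropout2d(feat, p=p, training=True)
```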
§.§.§ Pixel-level learning To optimize the tree density map at the pixel level, the distribution matching loss is utilized <cit.>. This loss function is based on the combination of a counting loss, an optimal transport loss, and a total variation loss. The counting loss (L_c) calculates the difference between the estimated and ground truth tree density values at the pixel level: L_c = ∑_k=1^K|‖ D_k‖ - ‖ D_gt‖| where K is the number of scales in the decoder, K =3; D_k is the estimated density map at a certain scale and D_gt is the corresponding ground truth. ‖ . ‖ denotes the L1 norm, which accumulates the density values in D_k or D_gt. The optimal transport loss (L_ot) calculates the difference between the normalized density distributions of the estimated density map and the ground truth <cit.> as follows: L_ot= ∑_k=1^K W(D_k/‖ D_k ‖,D_gt/‖ D_gt‖) where W is the optimal transport cost defined in <cit.>. Finally, the total variation loss L_tv is used to stabilize the training procedure and is defined as: L_tv=∑_k=1^K1/2‖D_k/‖ D_k ‖-D_gt/‖ D_gt‖‖ It alleviates the poor approximation of L_ot in low-density areas. Accordingly, the overall distribution matching loss for pixel-level learning is formulated as: L_dm= α _1 L_c+α _2 L_ot+α _3 L_tv where the weights α _1, α _2, and α _3 are set to 1, 0.1, and 0.01, respectively <cit.>. §.§.§ Region-level learning Our proposed loss function for region-level learning has two parts: the local tree count ranking loss and the local tree density consistency loss. To implement them, super- and sub-regions are cropped from the estimated density maps. The cropped regions have the same center and aspect ratio as the original map, and they are obtained by reducing the size iteratively by a scale factor of 0.75. Below we introduce our loss functions defined upon these regions. Local tree count ranking. This learning strategy serves as a self-supervised objective for unlabeled images (Fig. <ref>b). Inspired by <cit.>, the number of trees in a super-region is greater than or equal to the number of trees in its sub-regions. The network learns this ordinal relation of the cropped density maps by applying a ranking loss: γ = max(0, ϑ(d_m)-ϑ(d_n)) where d_n and d_m are the cropped super- and sub-regions from the estimated density map of an unlabeled image, respectively. ϑ sums the density values in a region, which signifies the number of estimated trees in this region. According to Eq. <ref>, γ will be zero when the ordinal relation is correct. We propose a multi-scale structure so that the ranking loss is applied to the estimated density map at each scale of the decoder (Fig. <ref>). The loss for each unlabeled image is computed by: L_rank=∑_k=1^K ∑_m=1^M-1∑_n=m+1^M max(0, ϑ(d_m,k)-ϑ(d_n,k)) where M is the number of cropped patches from a density map and K is the number of scales in the decoder.
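As an illustration, the local tree count ranking loss above can be sketched as follows. This is a minimal PyTorch-style example under stated assumptions: the nested center-crop routine, the inclusion of the full map as the largest region, and all names are ours, not the authors' code; the hinge term mirrors Eq. <ref> in that a sub-region must never contain more trees than its super-region.

```python
import torch


def center_crops(density, num_crops=3, ratio=0.75):
    # Nested center crops of the density map with the same center and aspect
    # ratio, each 0.75 times the size of the previous one (super- to sub-regions).
    crops = [density]
    full_h, full_w = density.shape[-2:]
    h, w = full_h, full_w
    for _ in range(num_crops - 1):
        h, w = int(h * ratio), int(w * ratio)
        top, left = (full_h - h) // 2, (full_w - w) // 2
        crops.append(density[..., top:top + h, left:left + w])
    return crops


def local_tree_count_ranking(pyramid_densities):
    # pyramid_densities: list of predicted density maps D_k, one per decoder scale.
    loss = 0.0
    for d_k in pyramid_densities:
        regions = center_crops(d_k)                      # largest (super) ... smallest (sub)
        counts = [r.sum(dim=(-2, -1)) for r in regions]  # estimated count per region
        for m in range(len(counts) - 1):
            for n in range(m + 1, len(counts)):
                # penalise a sub-region count exceeding its super-region count
                loss = loss + torch.clamp(counts[n] - counts[m], min=0).mean()
    return loss
```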
Local tree density consistency. The purpose of this strategy is to minimize the discrepancy between predictions at different scales after applying a perturbation to each scale (Fig. <ref>c). Since we do not have the ground truth, we use the mean prediction over the different scales of the decoder as the pseudo ground truth. We compute the Kullback–Leibler (KL) divergence between the mean prediction and the prediction at each scale to force the network to minimize this distance: L_consis= ∑_k=1^K∑_m=1^M∑_i=1^w∑_j=1^h d_m,k(i,j) · log d_m,k(i,j)/d_avg(i,j) where d_m,k is a certain cropped region from the density map of the k^th scale, d_avg=1/K∑_k=1^K d_m,k, and i and j represent the position of a pixel in the cropped density map of size w× h. Note that we use the same set of cropped regions as for the local tree count ranking; the consistency, however, is applied between the same density regions across different decoding scales. §.§.§ Image-level learning The predicted total numbers of trees from the TCT modules at different encoder phases are utilized to optimize the network parameters using both labeled and unlabeled data (Fig. <ref>d). Global tree count regularization. For labeled data, the values estimated by the TCTs over three phases, {t_1^l, t_2^l, t_3^l}, are compared with the total number of trees, t^l_gt, in the ground truth. For unlabeled data, since the ground truth is unavailable, the average of the estimated count values, {t_1^u, t_2^u, t_3^u}, t_avg^u=1/K∑_k=1^K t_k^u, is used as a pseudo ground truth to supervise the training. The image-level loss functions for labeled (L_ts) and unlabeled images (L_tu) are therefore defined by: L_ts=∑_k=1^K‖ t_k^l - t_gt^l‖ L_tu=∑_k=1^K‖ t_k^u - t_avg^u‖ §.§.§ Training loss Overall, the loss function for the labeled images is the sum of L_dm and L_ts (L_s=L_dm+L_ts). The loss function for the unlabeled images comprises three components, L_consis, L_rank and L_tu (L_u=L_consis+L_rank+L_tu). The final loss is the combination of L_s and L_u with a hyperparameter λ: L=L_s+λ L_u
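As an illustration of the image-level terms entering L_s and L_u, a minimal PyTorch-style sketch of the global tree count regularization is given below. The function and variable names are ours; the per-phase counts are assumed to be the TCT outputs t_k described above, given as tensors of shape [batch].

```python
import torch


def global_count_losses(counts_labeled, gt_count, counts_unlabeled):
    # counts_labeled / counts_unlabeled: lists of per-phase TCT count predictions
    # t_k (tensors of shape [batch]); gt_count: ground-truth tree counts for the
    # labeled images.
    loss_ts = sum(torch.abs(t_k - gt_count).sum() for t_k in counts_labeled)
    # Pseudo ground truth for unlabeled images: average of the per-phase counts.
    t_avg = torch.stack(counts_unlabeled).mean(dim=0)
    loss_tu = sum(torch.abs(t_k - t_avg).sum() for t_k in counts_unlabeled)
    return loss_ts, loss_tu
```

The returned values correspond to L_ts and L_tu and enter L_s and L_u as defined above.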
§ EXPERIMENTS §.§ Datasets §.§.§ KCL-London dataset This dataset, as specified in Sec. <ref>, contains high-resolution images with 0.2 m GSD from London and is divided into two parts: 613 labeled and 308 unlabeled images (Fig. <ref>). The labeled set is split into 452 samples for training and 161 samples for testing. The unlabeled set can be used optionally. §.§.§ Jiangsu dataset This study area contains 24 Gaofen-II satellite images with 0.8 m GSD captured over Jiangsu Province, China, for training and testing <cit.>. There are 664,487 trees manually annotated across 2400 images. The images cover different landscapes such as cropland, urban residential areas, and hills. This dataset is divided into a training set containing a total of 1920 images and a test set containing 480 images. §.§.§ Yosemite dataset This study area is centered at Yosemite National Park, California, United States of America <cit.>. A rectangular image of 19,200 × 38,400 pixels with 0.12 m GSD was collected from Google Maps, in which 98,949 trees were manually annotated. The data are divided into training (1350 images) and test (1350 images) sets. The characteristics of the study areas for the different datasets are presented in Table <ref>. §.§ Evaluation Protocol and Metrics To set up the semi-supervised experiments, we divide the training set of each dataset into 10%/90% and 30%/70% labeled/unlabeled subsets, respectively. We refer to the two settings as default settings 1 and 2. Note that the KCL-London dataset also provides 308 additional unlabeled images (with no annotations at all), which can also be used if specified. For the sake of convenience, we give the notation for the different sets in each dataset: first, we denote by 𝒟_tr and 𝒟_te the training and test set, respectively; 𝒟_ltr and 𝒟_utr denote the labeled and unlabeled subset within 𝒟_tr; finally, 𝒟_au denotes the additional unlabeled set in the KCL-London dataset. Following <cit.>, we use three criteria, mean absolute error (E_MAE), root mean squared error (E_RMS), and R-Squared (E_R2), to evaluate the results. They are defined as follows: E_MAE=1/N∑ _i=0^N |y_i^e-y_i^gt| E_RMS=√(1/N∑ _i=0^N (y_i^e-y_i^gt)^2) E_R^2=1-∑ _i=0^N (y_i^e-y_i^gt)^2/∑ _i=0^N (y_i^gt-y̅^gt)^2 where N denotes the number of samples, y_i^e represents the estimated tree number for the i-th sample, y_i^gt is the corresponding ground truth tree number, and y̅^gt is the mean ground truth tree number over the samples. In general, lower E_RMS and E_MAE values and a higher E_R2 indicate better performance. Besides E_RMS and E_MAE, which only consider the global count at the sample (image) level, we also employ the grid average mean absolute error (GAME) to analyze the performance of the proposed model at the region level. GAME typically has four levels, E_G0, E_G1, E_G2, and E_G3. For a specific level L, we subdivide the image into 4^L non-overlapping regions, and the estimated tree number is compared with the ground truth tree number in each sub-region: E_GL=1/N∑ _i=0^N ∑ _l=1^4^L |y_i,l^e-y_i,l^gt| where y_i,l^e is the estimated tree number in the l-th sub-region of the i-th image and y_i,l^gt is the corresponding ground truth. When L increases, the number of subdivided regions increases and the evaluation becomes more subtle. Moreover, we also employ the Precision (E_P), Recall (E_R), and F1-measure (E_F1) to assess the performance of the proposed model at the pixel level. They are calculated from the numbers of true positives, false positives, and false negatives obtained from a pixel-wise comparison between the predicted and ground truth density maps. We use E_G0, E_G1, E_G2, E_G3, E_P, E_R, and E_F1 only for the ablation study, to demonstrate the tree localization accuracy achieved by our proposed region-level and pixel-level optimization strategies. §.§ Implementation Details We build an encoder-decoder architecture in which the encoder is based on a transformer with four phases. The parameters of the transformer are set according to <cit.>. The decoder estimates three-scale density maps (Sec. <ref>). The number of channels at these scales is 128, 256, and 512 after applying the CAFF modules. The τ value in the TDR (Fig. <ref>d) is set to 1, 2, and 3 for the first, second, and third scales of the decoder. We augment the training set using horizontal flipping and random cropping <cit.>. Also, we randomly crop patches of a fixed size of 256×256 from the images as the input of the network. The number of epochs, batch size, learning rate, and weight decay are set to 500, 16, 10^-4, and 10^-5, respectively. The Adam optimizer is used. All parameters are tuned on the KCL-London dataset and used for all experiments. The ground truth contains the coordinates of the tree locations, which are specified by annotation dots. We follow <cit.> to generate the ground truth density maps from the tree locations using Gaussian functions.
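A minimal sketch of this standard Gaussian density-map construction is given below; the NumPy/SciPy implementation and the fixed sigma value are illustrative assumptions, not the exact settings of <cit.>.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def ground_truth_density_map(tree_points, height, width, sigma=4.0):
    # tree_points: iterable of (row, col) pixel coordinates of annotated trees.
    # A unit impulse is placed at every tree location and smoothed with a
    # Gaussian kernel, so the map integrates (approximately) to the tree count.
    density = np.zeros((height, width), dtype=np.float32)
    for r, c in tree_points:
        r, c = int(round(r)), int(round(c))
        if 0 <= r < height and 0 <= c < width:
            density[r, c] += 1.0
    return gaussian_filter(density, sigma=sigma)
```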
§.§ Comparisons with state of the art In this section, we evaluate the performance of the proposed TreeFormer against state-of-the-art models. We categorize the comparisons into two groups, semi-supervised and supervised models. Semi-supervised models: The models in the semi-supervised group are all trained under our default settings. To this end, four state-of-the-art semi-supervised methods are selected: cross-consistency training (CCT) <cit.>, mean-teacher (MT) <cit.>, interpolation consistency training (ICT) <cit.>, and learning to rank (L2R) <cit.>. These methods were not originally proposed for tree counting; we adapt them to our task for comparison. For instance, CCT, ICT, and MT were originally proposed for the image classification task; we adapt them to predict density maps and change their image-level classification consistency loss to our proposed local tree density consistency loss. L2R was originally proposed for crowd counting and we transfer it to tree counting; L2R only uses a single ranking loss on the final prediction, while our local tree count ranking loss is defined over multiple perturbed intermediate scales of the decoder. For the comparison, we use 10% or 30% of the training data as 𝒟_ltr and the rest as 𝒟_utr. To make a fair comparison, the same transformer blocks are used as the backbone for the compared methods. Table <ref> shows that our TreeFormer significantly outperforms the others under the same level of supervision. For instance, on the KCL-London dataset, with 10% and 30% labeled data, TreeFormer achieves a decrease of 2.87 and 3.60 in E_MAE, 3.78 and 4.55 in E_MSE, and an increase of 0.15 and 0.11 in E_R^2 compared to CCT. On the Jiangsu dataset, our model also reduces E_MAE by 18.54 and 9.35 relative to the previously best-performing model, CCT, using 10% and 30% labeled data, respectively. The same observation also holds for the Yosemite dataset. Overall, our model produces the lowest errors on the Yosemite dataset amongst all datasets. We believe the reason lies in the simple image characteristics of this study area (see Table <ref>). In the Yosemite dataset, the background and the trees are very different, which makes tree identification easier, while in the KCL-London and Jiangsu datasets there are various objects such as buildings, cars, and vegetation, which makes the identification and counting of trees challenging. Also, in the Jiangsu dataset, the lower resolution of the images compared to KCL-London reduces the accuracy of the results. In Fig. <ref>, we show some qualitative results of our method compared with the other semi-supervised methods. Supervised models: To further investigate the effectiveness of our model, we evaluate the proposed model in the case of supervised training and compare it with existing methods (Table <ref>). In this scenario, the entire 𝒟_tr is assumed labeled for training. We denote the supervised version of our method by S-TreeFormer, which has the same backbone as the original TreeFormer. The DM loss and the global tree count regularization are still used in the supervised form; the local tree count ranking and local density consistency, however, are no longer used. The compared methods include SASNet <cit.>, FusionNet <cit.>, EDNet <cit.>, Swin-UNet <cit.>, CSRNet <cit.>, MCNN <cit.>, DENT <cit.>, and TreeCountNet <cit.>. Specifically, SASNet, CSRNet, MCNN, and FusionNet are state-of-the-art crowd counting methods, which we reproduce for the tree counting task. Swin-UNet is based on the transformer architecture and the others are based on convolutional architectures.
The DENT employs a convolutional architecture for extracting the feature maps from the input image; a transformer encoder is then used to model the interaction of the extracted features and estimate the tree density map. In the experiments on the KCL-London and Yosemite datasets, our model achieves the highest accuracy. For the Jiangsu dataset, our model obtains the lowest E_MAE and E_MSE, while TreeCountNet achieves a slightly higher E_R^2 (0.01). In Fig. <ref>, we show some qualitative results of our method compared with other supervised methods. In the last row of Table <ref>, the performance of the proposed TreeFormer is presented when the additional unlabeled images in 𝒟_au are used along with all labeled images in 𝒟_tr for network training. Using the unlabeled data further reduces E_MAE and E_MSE by 1.82 and 1.34, respectively. Finally, the number of parameters, FLOPS, and inference time of the proposed TreeFormer are compared with the other semi-supervised methods in Table <ref>. To make a fair comparison, we set the batch size to 1 for all methods on the KCL-London dataset. According to the results in Table <ref>, CCT achieves the second-best performance in general, yet it clearly consumes more FLOPS, parameters, and inference time than the proposed TreeFormer. ICT has the lowest computational cost in general, yet the cost of ours is not significantly different from ICT. Note that MT and ICT have the same basic architectures, hence their corresponding values in Table <ref> are the same. The same observation also holds for our TreeFormer and its supervised version S-TreeFormer. §.§ Ablation Study We analyze TreeFormer on the KCL-London dataset by ablating its proposed components to evaluate their effects on the model accuracy. The ablation study is conducted under our default semi-supervised setting 2, i.e., 30% labeled and 70% unlabeled images. §.§.§ Analysis on model architecture In this section, we investigate the proposed PTFR, CAFF and TDR modules. PTFR module. The pyramid structure of the PTFR can be downgraded by reducing the number of phases of the encoder from 4 to 2 (phases 1 and 2 in Fig. <ref>a), so that only one scale is produced in the decoder. This phase reduction increases E_MAE and E_RMS by 6.78 and 9.80 and reduces E_R^2 by 0.28 compared to the original TreeFormer. CAFF module. To investigate the effect of the CAFF module, we present the result without using it (w/o CAFF) in Table <ref>. It shows that E_MAE is increased by 5.08 when the CAFF module is removed. Next, we devise another variant, using CAFF without channel attention (CAFF w/o CA), which increases E_MAE by 3.92 and E_MSE by 7.99. In addition, the effect of changing the channel attention layer to a spatial attention layer (CAFF w/ SA) and of using both channel and spatial attention blocks simultaneously (CAFF w/ SA+CA) is also reported in Table <ref>. Accordingly, replacing channel attention with other attention schemes reduces the accuracy of the results. TDR module. First, the number of blocks of Conv, BN, and ReLU layers in the TDR module (τ in Fig. <ref>d) is studied for different scales. In Table <ref>, in the first case, one block of these layers (τ=1) is utilized for all three scales to reduce the channel numbers and calculate the density maps. In the second and third cases, τ was set to 2 and 3 for all scales, respectively. In the fourth case, τ was set to 1, 2, and 3 for the first, second, and third scales, respectively (see Fig. <ref>a).
The results show that the fourth case achieves the best performance. According to Sec. <ref>, the fourth case is also the best choice theoretically. Moreover, we analyze the selection of the perturbations, feature perturbation (P_1), feature masking (P_2), and dropout (P_3), for estimating the tree density maps. By default we use P_1, P_2, and P_3 for D_1, D_2, and D_3 (Fig. <ref>a), respectively. Applying the mentioned perturbations has different effects on different scales due to the type of change they produce on the feature maps. For instance, applying P_1 to D_3 would result in more noise than applying it to D_2 or D_1, because the resolution of D_3 (before upsampling) is smaller than that of D_2 and D_1. Altering too much information at one scale causes a drop in network performance. Hence, we specifically design P_1, P_2, and P_3 to suit the scales D_1, D_2, and D_3 from fine to coarse. We compare this to a random order and to other specific orders of the perturbations in Table <ref>. The results show that the order P_1, P_2, P_3 works best. §.§.§ Analysis on learning strategy We introduce a pyramid learning strategy that consists of three levels: pixel-level, region-level, and image-level learning. Pixel-level learning. To verify that the designed strategy is effective, we utilize the L2 loss instead of the DM loss (w/ L2). Table <ref> shows that using L2 increases E_MAE by 18.62 and E_RMS by 25.64. Also, E_P, E_R, and E_F1, as the pixel-level localization metrics, exhibit a reduction of 27.86%, 45.75%, and 38.55%, respectively. Region-level learning. Investigating the performance of the proposed TreeFormer without the local tree density consistency (w/o LTC) indicates an increase of E_MAE by 4.30 and E_RMS by 6.50 (Table <ref>). Furthermore, analyzing the computed region-level metrics shows that E_G1, E_G2, and E_G3 are increased by 5.08, 4.63, and 2.84, respectively. The consistency is applied over different cropped regions of the image. If we only apply it at the single image level (w/ STC), the performance is also improved compared to that w/o LTC; however, applying the consistency on cropped regions clearly leads to better accuracy. LTC employs the KL divergence to measure the distance between the density distributions obtained from the unlabeled images at each scale and the pseudo ground truth. A variant is to use the Jensen-Shannon (JS) divergence, which measures the total distance from either distribution to the average of the two probability distributions. We compare the results of using the KL divergence and the JS divergence in Table <ref> (TreeFormer w/ LTC-JS), which shows the better performance of the KL divergence in estimating tree densities. KL is more suitable than JS in our task: the KL divergence is asymmetric, which means it measures the difference between two density maps in one direction only. This makes it suitable in a density estimation task where one density map is known to be a reference. In contrast, the JS divergence is symmetric; it treats both density maps as equal. On the other hand, the performance of the model without using the proposed local tree count ranking loss is also evaluated (w/o LTR). Table <ref> shows that E_MAE and E_MSE are increased by 5.63 and 10.64, respectively. Besides, not using the LTR also reduces the region-level accuracy and increases E_G1, E_G2, and E_G3 by 7.17, 6.16, and 3.72, respectively. We also present a variant utilizing a single ranking loss only for the last layer (D_1 in Fig. <ref>a) of the model (w/ STR).
This has a weaker performance than applying the ranking loss on the intermediate scales of the decoder. Image-level learning. Finally, to show the advantage of using the proposed global tree count regularization, the performance of TreeFormer without it (w/o GTR) is evaluated. Table <ref> demonstrates that E_MAE and E_MSE are increased by 1.17 and 2.13, respectively. §.§.§ Analysis of the effect of the number of labeled images In this section, TreeFormer is examined when using different amounts of labeled training data. In the first evaluation, 10% labeled and 90% unlabeled images are used for network training. This process is carried out for 20%, 30%, 40%, 60%, 80%, and 100% of the labeled training data, with the remaining percentage used as unlabeled training data, and the obtained values are shown in Fig. <ref>a. It can be seen that the counting accuracy of TreeFormer with 30% labeled images is already close to the fully-supervised model (the star point). Moreover, the performance of the network in supervised form without using the unlabeled images (S-TreeFormer, see Sec. <ref>) is also assessed (Fig. <ref>a). One can see a large error reduction from S-TreeFormer to TreeFormer, which verifies the effectiveness of our semi-supervised framework. Next, we further investigate TreeFormer by fixing 10% of the labeled images while gradually adding more unlabeled images. In Fig. <ref>b we present the results with the number of unlabeled images increased from 100 to 700 (including the images in 𝒟_au). E_MAE keeps decreasing in this process. Afterward, the number of fixed labeled images is increased to 50%, while the number of unlabeled images is gradually increased from 100 to 500 (Fig. <ref>). It can be seen that E_MAE decreases from 21.4 using 100 unlabeled images to 19.4 using 500 unlabeled images. Finally, all labeled images are used and the number of unlabeled images is increased from 100 to 308. According to Fig. <ref>, E_MAE decreases from 17.6 using 100 unlabeled images to 16.6 using 308 unlabeled images. Overall, when the amount of labeled training data is small relative to the amount of unlabeled data, the effect of using more unlabeled data on the accuracy of the results is more apparent. § CONCLUSION In this paper, we propose a semi-supervised architecture based on transformer blocks for tree counting from single remote sensing images. In this network, the contextual attention-based feature fusion module is introduced to combine the features extracted during the encoding process with the decoding part of the network. In addition, the tree density regressor module is designed to estimate the tree density map after applying different perturbations. The tree counter token is introduced to calculate the total number of trees in the encoding phases, and the obtained global count acts as a regularizer to improve training performance. Moreover, we propose a pyramid learning strategy that includes local tree count ranking and local tree density consistency to leverage unlabeled images in training. A new tailored tree counting dataset, KCL-London, is constructed from Google Earth images, in which the central points of the tree canopies were annotated manually. The results on three datasets demonstrate that our method achieves superior performance compared with the state of the art in both semi-supervised and supervised settings. Counting trees has multiple applications in environmental intelligence and environmental management.
The algorithm developed here is scalable to a range of commonly available high-resolution image types. Accessibility of open-source high-resolution imagery is fundamental to being able to map, and therefore manage, trees in both urban and rural areas. Trees in different regions of the earth have varied shapes and canopies. It is not realistic to prepare training data from all available domains for network training; hence, improving the generalizability of the proposed network on heterogeneous datasets (e.g., the NEON dataset) via domain generalization and adaptation techniques is a direction for future work. § ACKNOWLEDGEMENTS This project (ReSET) has received funding from the European Union’s Horizon 2020 FET Proactive Programme under grant agreement No 101017857. The contents of this publication are the sole responsibility of the ReSET consortium and do not necessarily reflect the opinion of the European Union. Miaojing Shi was also supported by the Fundamental Research Funds for the Central Universities.
http://arxiv.org/abs/2307.03950v1
20230708105034
Mod 2 instanton homology and 4-manifolds with boundary
[ "Kim A. Frøyshov" ]
math.GT
[ "math.GT", "math.DG" ]
Mod 2 instanton homology and 4-manifolds with boundary Kim A. Frøyshov ====================================================== Using instanton homology with coefficients in /2 we construct a homomorphism from the homology cobordism group to the integers which is not a rational linear combination of the instanton h–invariant and the Heegaard Floer correction term d. If an oriented homology 3–sphere Y bounds a smooth, compact, negative definite 4–manifold without 2–torsion in its homology then (Y)≥0, with strict inequality if the intersection form is non-standard. § INTRODUCTION This paper will introduce an integer invariant (Y) of oriented integral homology 3–spheres Y. This invariant is defined in terms of instanton cohomology with coefficients in /2 and may be regarded as a mod 2 analogue of the h–invariant <cit.>, which was defined with rational coefficients. Both invariants grew out of efforts to extend Donaldson's diagonalization theorem <cit.> to 4–manifolds with boundary. We will use the instanton (co)homology originally introduced by Floer <cit.>, an exposition of which can be found in <cit.>. With coefficients in /2, instanton cohomology I(Y;/2) comes equipped with some extra structure, namely two “cup products” u_2 and u_3 of degrees 2 and 3, respectively, and homomorphisms I^4(Y;/2)_0⟶/2_0'⟶ I^1(Y;/2) counting index 1 trajectories running into and from the trivial flat 2 connection, respectively. This extra structure enters in the definition of the invariant q_2, which is given in Section <ref>. Reversing the rôles of the cup products u_2,u_3 in the definition yields another invariant q_3. However, the present paper will focus on . It would be interesting to try to express the invariants h,q_2,q_3 in terms of the equivariant instanton homology groups recently introduced by Miller Eismeier <cit.>. We now describe some properties and applications of . For any oriented homology 3–spheres Y_0 and Y_1 one has (Y_0#Y_1)=(Y_0)+(Y_1). The proof of additivity is not quite straightforward and occupies more than half the paper. thm[Monotonicity] Let W be a smooth compact oriented 4-manifold with boundary W=(-Y_0)∪ Y_1, where Y_0 and Y_1 are oriented homology 3–spheres. Suppose the intersection form of W is negative definite and H^2(W;) contains no element of order 4. Then (Y_0)≤(Y_1). If the manifold W in the theorem actually satisfies b_2(W)=0 then one can apply the theorem to -W as well so as to obtain (Y_0)=(Y_1). This shows that descends to a group homomorphism →, where is the integral homology cobordism group. We observe that the properties of described so far also hold for the instanton h–invariant, the negative of its monopole analogue <cit.>, and the Heegaard Floer correction term d. Note that the latter three invariants are monotone with respect to any negative definite cobordism, without any assumption on the torsion in the cohomology. thm[Lower bounds] Let X be a smooth compact oriented 4-manifold whose boundary is a homology sphere Y. Suppose the intersection form of X is negative definite and H^2(X;) contains no 2-torsion. Let J_X:=H^2(X;)/torsion, and let w be an element of J_X which is not divisible by 2. Let k be the minimal square norm (with respect to the intersection form) of any element of w+2J_X. Let n be the number of elements of w+2J_X of square norm k. If k≥2 and n/2 is odd then (Y)≥k-1. By an integral lattice we mean a free abelian group of finite rank equipped with a symmetric bilinear integer-valued form.
Such a lattice is called odd if it contains an element of odd square; otherwise it is called even. cor Let X be as in Theorem <ref>. Let J_X⊂ J_X be the orthogonal complement of the sublattice of J_X spanned by all vectors of square -1, so that J_X is an orthogonal sum J_X=m-1⊕ J_X for some non-negative integer m. (i) If J_X≠0, i.e. if J_X is not diagonal, then (Y)≥1. (ii) If J_X is odd then (Y)≥2. To deduce (i) from the theorem, take C:=v+2J_X where v is any non-trivial element of J_X of minimal square norm. To prove (ii), choose a v with minimal odd square norm. thm Let Y be the result of (-1) surgery on a knot K in S^3. If changing n^- negative crossings in a diagram for K produces a positive knot then 0≤(Y)≤ n^-. For k≥2 the Brieskorn sphere (2,2k-1,4k-3) is the boundary of a plumbing manifold with intersection form -_4k (see Section <ref>), and it is also the result of (-1) surgery on the (2,2k-1) torus knot. In these examples the upper bound on given by Theorem <ref> turns out to coincide with the lower bound provided by Theorem <ref>, and one obtains the following. For k≥2 one has ((2,2k-1,4k-3))=k-1. On the other hand, by <cit.> one has h((2,2k-1,4k-3))=⌊ k/2⌋, and in these examples the correction term d satisfies d=h/2, as follows from <cit.>. This shows: The invariant is not a rational linear combination of the h–invariant and the correction term d.□ In particular, h,:→ are linearly independent homomorphisms, and the same is true for d,. It follows from this that has a ^2 summand. However, much more is true: Dai, Hom, Stoffregen, and Truong <cit.> proved that has a ^∞ summand. Their proof uses involutive Heegaard Floer homology. The monotonicity of the invariants h,d, leads to the following result. Let Y be an oriented homology 3-sphere. If min(h(Y),d(Y))<0<(Y) then Y does not bound any definite 4-manifold without elements of order 4 in its second cohomology. An explicit example to which the theorem applies is 2(2,5,9)#-3(2,3,5). A related result was obtained by Nozaki, Sato, and Taniguchi <cit.>. Using a filtered version of instanton homology they proved that certain linear combinations of Brieskorn homology 3–spheres do not bound any definite 4–manifold. If an oriented homology 3-sphere Y satisfies h(Y)≤0<(Y) then I^5(Y;) contains 2–torsion, hence Y is not homology cobordant to any Brieskorn sphere (p,q,r). We conclude this introduction with two sample applications of the invariant . Let X be a smooth compact oriented connected 4-manifold whose boundary is the Poincaré sphere (2,3,5). Suppose the intersection form of X is negative definite. Let J_X be as in Corollary <ref>. (i) If J_X is even then J_X=0 or -E_8. (ii) If J_X is odd then H^2(X;) contains an element of order 4. Earlier versions of this result were obtained using instanton homology in <cit.> (assuming X is simply-connected) and in <cit.> (assuming X has no 2–torsion in its homology). There are up to isomorphism two even, positive definite, unimodular forms of rank 16, namely 2E_8 and _16. If Z denotes the negative definite E_8–manifold then the boundary connected sum Z#_Z has intersection form -2E_8. It is then natural to ask whether (2,3,5)#(2,3,5) also bounds -_16. There appears to be no obstruction to this coming from the correction term. Let X be a smooth compact oriented 4-manifold whose boundary is (2,3,5)#(2,3,5). Suppose the intersection form of X is negative definite and H^2(X;) contains no 2–torsion. If J_X is even then J_X=0, -E_8, or -2E_8.
Further results on the definite forms bounded by a given homology 3–sphere were obtained by Scaduto <cit.>. Some of the results of this paper were announced in various talks several years ago. The author apologizes for the long delay in publishing the results. § THE BASE-POINT FIBRATION Let X be a connected smooth n–manifold, possibly with boundary, and P→ X a principal 3 bundle. Fix p>n and let A be a p1 connection in P. This means that A differs from a smooth connection by a 1–form which lies locally in L^p_1. Let _A be the group of p2 automorphisms (or gauge transformations) of P that preserve A. The connection A is called * irreducible if _A={1}, otherwise reducible; * Abelian if _A≈1; * twisted reducible if _A≈/2. Note that a non-flat reducible connection in P is either Abelian or twisted reducible. Recall that automorphisms of P can be regarded as sections of the bundle P3×3 of Lie groups, where 3 acts on itself by conjugation. An automorphism is called even if it lifts to a section of P3×2. A connection A in P is called even-irreducible if its stabilizer _A contains no non-trivial even automorpisms, otherwise A is called even-reducible. A non-flat connection is even-reducible if and only if it is Abelian. Now suppose X is compact and let be the space of all L^p_1 connections in P. The affine Banach space is acted upon by the Banach Lie group consisting of all L^p_2 automorphisms of P. Let ^*⊂ be subset of irreducible connections and define =/. The irreducible part ^*⊂ is a Banach manifold, and it admits smooth partitions of unity provided p>n is an even integer, which we assume from now on. Instead of ^* we often write ^*(P), or ^*(X) if the bundle P is trivial. Similarly for , etc. Let ^* be the space of all even-irreducible L^p_1 connections in P. Let be the group of even p2 automorphisms of P. As explained in <cit.>, there is an exact sequence 1→→→ H^1(X;/2)→0. The quotient ^*=^*/ is a Banach manifold. Let X be a topological space. (i) A class v∈ H^2(X;/2) is called admissible if v has a non-trivial pairing with a class in H_2(X;), or equivalently, if there exist a closed oriented 2–manifold and a continuous map f:→ X such that f^*v≠0. If and f can be chosen such that, in addition, f^*a=0 for every a∈ H^1(X;/2), then v is called strongly admissible. (ii) An 3 bundle E→ X is called (strongly) admissible if the Stiefel-Whitney class w_2(E) is (strongly) admissible. For example, a finite sum v=∑_ia_i∪ b_i with a_i,b_i∈ H^1(X;/2) is never strongly admissible. Let X be a compact, oriented, connected smooth 4–manifold with base-point x∈ X. Let P→ X be an 3 bundle. (i) If P is admissible then the 3 base-point fibration over ^*(P) lifts to a 2 bundle. (ii) If P is strongly admissible then the 3 base-point fibration over ^*(P) lifts to a 2 bundle. We spell out the proof of (ii), the proof of (i) being similar (or easier). Let be a closed oriented surface and f:→ X a continuous map such that f^*P is non-trivial and eqn:fa0 holds. We can clearly arrange that is connected. Because X≥2 it follows from <cit.> that f can be uniformly approximated by (smooth) immersions f_0. Moreover, if the approximation is sufficiently good then f_0 will be homotopic to f. Therefore, we may assume f is an immersion. Since base-point fibrations associated to different base-points in X are isomorphic we may also assume that x lies in the image of f, say x=f(z). We adapt the proof of <cit.>, see also <cit.>. Let →^*:=^*(P) be the oriented Euclidean 3–plane bundle associated to the base-point fibration. 
We must find an Hermitian 2-plane bundle such that is isomorphic to the bundle ^0_ of trace-free skew-Hermitian endomorphisms of . Let E→ X be the standard 3–plane bundle associated to P. Choose an Hermitian 2–plane bundle W→ together with an isomorphism ϕ:^0_W≈→ f^*E, and fix a connection A_,det in (W). Any (orthogonal) connection A in E induces a connection in f^*E which in turn induces a connection A_ in W with central part A_,det. Choose a spin structure on and let S^*± be the corresponding spin bundles over . For any connection A in E let _,A:S^+⊗ W→ S^-⊗ W be the Dirac operator coupled to A_. If A is an L^p_1 connection, p>4, and A_0 is a smooth connection in E then A-A_0 is continuous, hence _,A-_,A_0 defines a bounded operator L^2→ L^2 and therefore a compact operator L^2_1→ L^2. Let :=(_,W) be the determinant line bundle over (E) associated to the family of Fredholm operators _,A:L^2_1→ L^2. Then automorphism (-1) of W acts on with weight equal to the numerical index of _,A. According to Atiyah-Singer's theorem <cit.> this index is (_,A)={ch(W)Â()}·[]=c_1(W)·[]. But the mod 2 reduction of c_1(W) equals f^*(w_2(E)), which is non-zero by assumption, so the index is odd. The assumption eqn:fa0 means that every automorphism of E pulls back to an even automorphism of f^*E. Moreover, every even automorphism of f^*E≈^0_W lifts to an automorphism of W of determinant 1, the lift being well-defined up to an overall sign since is connected. Because the automorphism (-1) of W acts trivially on ⊗ W_z this yields an action of (E) on ⊗ W_z. The quotient :=(⊗ W_z)/(E) is a complex 2-plane bundle over ^*(E). We claim that there is an Hermitian metric on such that on every fibre _A there is an Hermitian metric for which the projection _A⊗ W_z→_[A] is an isometry. To see this, let S⊂(E) be any local slice for the action of (E), so that S projects diffeomorphically onto an open subset U⊂^*(E). Choose any Hermitian metric on |_S and let g_U be the induced Hermitian metric on _U≈(⊗ W_z)|_S. Now cover ^*(E) by such open sets U and patch together the corresponding metrics g_U to obtain the desired metric on . Given any Hermitian metric on a fibre _A there are linear isometries ^0__A⊗ W_z≈→^0_W_z≈→ E_x, where the first isometry is canonical and independent of the chosen metric on _A and the second one is given by ϕ. This yields an isomorphism ^0_≈→.□ § MODULI SPACES Let P→ Y be a principal 3 bundle, where Y is a closed oriented 3–manifold. The Chern-Simons functional :(P)→/ is determined up to an additive constant by the property that if A is any connection in the pull-back of P to the band [0,1]× Y then (A_1)-(A_0)=∫_[t_0,t_1]× Y F_A∧ F_A, where A_t denotes the restriction of A to the slice {t}× Y, and ·∧· is formed by combining the wedge product on forms with minus the Killing form on the Lie algebra of 3. If P=Y×3 then we normalize so that its value on the product connection θ is zero. If v is any automorphism of P then for any connection B in P one has (v(B))-(B)=-1/2(v), where the degree (v) is defined to be the intersection number of v with the image of the constant section 1. Equation eqn:csdeg, up to an overall sign, was stated without proof in <cit.>. A proof of eqn:csdeg can be obtained by first observing that the left-hand side of the equation is independent of B, and both sides define homomorphisms from the automorphism group of P into . Replacing v by v^2 it then only remains to verify the equation for even gauge transformations, which is easy. 
If v lifts to a section v of P3×2 then (v)=2( v), where ( v) is the intersection number of v with the image of the constant section 1. In particular, every even automorphism of P has even degree. The critical points of the Chern-Simons functional are the flat connections in P. In practice, we will add a small holonomy perturbation to as in <cit.>, but this will usually not be reflected in our notation. Let (P) denote the space of all critical points of modulo even automorphisms of P. The even-reducible part of (P) is denoted by ^*(P). If Y is an (integral) homology sphere then P is necessarily trivial and we write (Y)=(P). Now let X be an oriented Riemannian 4–manifold with tubular ends [0,∞)× Y_i, i=0,…,r, such that the complement of :=⋃_i [0,∞)× Y_i is precompact. We review the standard set-up of moduli spaces of anti-self-dual connections in a principal 3 bundle Q→ X, see <cit.>. Given a flat connection ρ in Q|_, we define the moduli space M(X,Q;ρ) as follows. Choose a smooth connection A_0 in Q which agrees with ρ outside a compact subset of X. We use the connection A_0 to define Sobolev norms on forms with values in the adoint bundle _Q of Lie algebras associated to Q. Fix an even integer p>4. Let =(Q) be the space of connections in Q of the form A_0+a with a∈ pw1, where w is a small, positive exponential weight as in <cit.>. There is a smooth action on by the Banach Lie group consisting of all p2 gauge transformation u of Q such that ∇_A_0u· u∈ pw1. Let :=/ and let M(X,Q;ρ) be the subset of consisting of gauge equivalence classes of connections A satisfying F^+_A=0. In practice, we will often add a small holonomy perturbation to the ASD equation, but this will usually be suppressed from notation. We observe that the value of the Chern-Simons integral (Q,ρ):=-1/8π^2∫_X F_A∧ F_A is the same for all A∈. (If X is closed then the right hand side of Equation eqn:ka-int equals the value of -p_1(Q) on the fundamental class of X. This normalization will be convenient in Section <ref>.) If u is an automorphism of Q|_ then from Equations eqn:cs-int-band and eqn:csdeg we deduce that (Q,u(ρ))-(Q,ρ)=2∑_i(u_i), where u_i is the restriction of uto the slice {0}× Y_i. Similarly, for the expected dimensions we have M(X,Q;u(ρ))-M(X,Q;ρ)=4∑_i(u_i). On the other hand, if u extends to a smooth automorphism of all of Q then ∑(u_i)=0, and the converse holds at least if u is even. Given the reference connection A_0, we can identify the restriction of the bundle Q to an end [0,∞)× Y_i with the pull-back of a bundle P_i→ Y_i. Let _i∈(P_i) be the element obtained by restricting ρ to any slice {t}× Y_i where t>0. We will usually assume that each _i is non-degenerate. The above remarks show that the moduli space M(X,Q;ρ) can be specified by the r–tuple =(_1,…,_r) together with one extra piece of data: Either the Chern-Simons value =(Q,ρ) or the expected dimension d of M(X,Q;ρ). We denote such a moduli space by M_(X,Q;) or M_(d)(X,Q;). Note that for given there is exactly one moduli space M_(d)(X,Q;) with 0≤ d≤7; this moduli space will just be denoted by M(X,Q;). For any anti-self-dual connection A over X, the energy _A(Z) of A over a measurable subset Z⊂ X is defined by _A(Z):=-∫_Z F_A∧ F_A =∫_Z|F_A|^2. If X= and Z=I× Y for some interval I then we write _A(I) instead of _A(I× Y). § SPACES OF LINEARLY DEPENDENT VECTORS This section provides background for the definition of the cup product u_2 as well as results which will be used in the proof of Proposition <ref>. 
For any finite-dimensional real vector space V set L(V):={(v,w)∈ V⊕ Vv,w are linearly dependent in V}. Then L(V) is closed in V⊕ V and L^*(V):=L(V)∖{(0,0)} is a smooth submanifold of V⊕ V of codimension n-1, where n is the dimension of V. As a short-hand notation we will often write v∧ w=0 to express that v,w are linearly dependent. If B is any smooth Banach manifold and π:E→ B a smooth real vector bundle of finite rank let L^*(E)→ B be the associated smooth fibre bundle whose fibre over a point x∈ B is L^*(E_x), where E_x=π(x). Similarly, let L(E)→ B be the topological fibre bundle with fibre L(E_x) over x. Let ℓ→ S^1 be the non-trivial real line bundle such that for z∈ S^1 the fibre of ℓ over z^2 is the line z in . Let E:=E× S^1 and ℓ:=B×ℓ be the pull-backs of the bundles E and ℓ, respectively, to B× S^1. We identify R^2=, so that (a,b)=a+bi for real numbers a,b. Let s=(s_1,s_2) be a nowhere vanishing smooth section of E⊕ E. Let be the section of E⊗ℓ such that for any p∈ B and z=(x_1,x_2)∈ S^1 one has (p,z^2)=(x_1s_1(p)+x_2s_2(p))⊗ z. (i) The projection B× S^1→ B maps the zero-set of bijectively onto the locus in B where s_1,s_2 are linearly dependent. (ii) A zero (p,w) of is regular if and only if s is transverse to L^*(E) at p. The proof of (i) is left as an exercise. To prove (ii) we may assume E is trivial, so that s_j is represented by a smooth map f_j:B→ V for some finite-dimensional real vector space V. We observe that for any u_1,u_2∈ V and z=(x_1,x_2)∈ S^1 one has (u_1,u_2)=(x_1u_1+x_2u_2)⊗ z+(x_1u_2-x_2u_1)⊗ iz as elements of V⊕ V=V⊗_. It follows that the tangent space of L^*(V) at a point (v_1,v_2) which satisfies x_1v_1+x_2v_2=0 is given by T_(v_1,v_2)L^*(V)=V⊗ iz+(x_1v_2-x_2v_1)⊗ z. Now suppose (p,w) is a zero of and s(p)=(v_1,v_2), z^2=w. Then eqn:tlv holds. Let L_j:T_pB→ V be the derivative of f_j at p. Then (p,w) is a regular zero of precisely when V is spanned by the vector x_1v_2-x_2v_1 together with the image of the map x_1L_2+x_2L_2. From eqn:u1u2 we see that the latter condition is also equivalent to s being transverse to L^*(V) at p.□ We record here a description of the sections of E⊗ℓ which will be used in the proof of Proposition <ref> below. Let _a( E) denote the space of all sections s∈( E) such that s(p,-z)=-s(p,z) for all (p,z)∈ B× S^1. Then there is a canonical real linear isomorphism ( E⊗ℓ)→_a( E), ↦ characterized by the fact that (p,z^2)=(p,z)⊗ z for all (p,z)∈ B× S^1.□ If B is finite-dimensional, the bundle E has rank 3, and s is a generic smooth section of E⊕ E then s(L(E)) represents the Poincaré dual of the second Stiefel-Whitney class w_2(E) in the following sense. Given any class a∈ H_2(B;/2), represented by a generic smooth map f:→ B where is a closed surface, then a,w_2(E)≡#(s∘ f)(L(E))2. § “GENERIC” SECTIONS Let B be a smooth Banach manifold and π:E→ B a smooth real vector bundle of finite rank. If B is infinite-dimensional then we do not define a topology on the space (E) of (smooth) sections of E, so it makes no sense to speak about residual subsets of (E). Instead, we will say a subset Z⊂(E) is “residual” (in quotation marks) if there is a finite-dimensional subspace ⊂(E) such that for every finite-dimensional subspace '⊂(E) containing and every section s of E there is a residual subset ⊂' such that s+⊂ Z. Note that “residual” subsets are non-empty, and any finite intersection of “residual” subsets is again “residual”. 
We will say a given property holds for a “generic” section of E if it holds for every section belonging to a “residual” subset of (E). We indicate one way of constructing such subspaces . Suppose B supports smooth bump functions, i.e. for any point x∈ B and any neighbourhood U of x there exists a smooth function c:B→ such that c(x)≠0 and c=0 outside U. Given a compact subset K of B, one can easily construct a finite-dimensional subspace ⊂(E) such that, for every x∈ K, the evaluation map → E_x, s↦ s(x) is surjective. Therefore, if we are given a collection of smoooth maps f_k:M_k→ B, k=1,2,…, where each M_k is a finite-dimensional manifold and the image of each f_k is contained in K then, for a “generic” section s of E, the map s∘ f_k:M_k→ E is transverse to the zero-section in E for each k. § INSTANTON COHOMOLOGY AND CUP PRODUCTS In this section we will work with 3 connections modulo even gauge transformation (see Section <ref>), although this will not be reflected in our notation. In particular, we write ^* instead of ^*. This notational convention applies only to this section. (In Subsection <ref>, which only deals with homology spheres, the convention is irrelevant.) §.§ Instanton cohomology Let Y be a closed oriented connected 3-manifold and P→ Y an 3 bundle. If Y is not an homology sphere then we assume P is admissible. For any ,β∈(P) let M(,β) denote the moduli space of instantons in the bundle × P→ with flat limits at -∞ and β at ∞ and with expected dimension in the interval [0,7]. Let (,β)=M(,β)/, where acts by translation. If ,β are irreducible then the relative index (,β)∈/8 is defined by (,β)= M(,β)8. For any commutative ring R with unit we denote by I(P;R) the relatively /8 graded instanton cohomology with coefficients in R as defined in <cit.>. Recall that this is the cohomology of a cochain complex (C(P;R),d) where C(P;R) is the free R–module generated by ^*(P) and the differential d is defined by d=∑_β#(,β)·β. Here, # means the number of points counted with sign, and the sum is taken over all β∈^*(P) satisfying (,β)=1. If P is admissible then ^*(P)=(P). If instead Y is an homology sphere then (P)=(Y) contains exactly one reducible point θ, represented by the trivial connection. The presence of the trivial connection provides C(P;R)=C(Y;R) with an absolute /8 grading defined by ()= M(θ,)8. The trivial connection also gives rise to homomorphisms C^4(Y;R)→ R'→ C^1(Y;R) defined on generators by =#(,θ), 1=∑_β#(θ,β)·β, where we sum over all β∈^*(Y) of index 1. These homomorphisms satisfy d=0 and d'=0 and therefore define I^4(Y;R)_0→ R_0'→ I^1(Y;R). We conclude this subsection with some notation for energy. If A is any ASD connection in the bundle Q:=× P and I is any interval then we write _A(I) instead of _A(I× Y). Moreover, if ,β∈(Y) and the moduli space M(,β) is expressed as M(,Q;ρ) in the notation of Section <ref> then we define (,β):=1/4(Q,ρ), which equals the total energy of any element of M(,β). (Note, however, that M(,β) may be empty.) §.§ Cup products We continue the discussion of the previous subsection, assuming P is admissible unless Y is an homology sphere. In most of this paper the coefficient ring R will be /2, and we write I(P):=I(P;/2). For j=2,3 we will define a degree j endomorphism u_j:I^*(P)→ I^*+j(P). Insofar as the Floer cohomology is some kind of Morse cohomology of ^*(P), one may think of u_j as cup product with the jth Stiefel-Whitney class of the base-point fibration over ^*(P). 
The map u_j will be induced by an endomorphism v_j:C^*(P)→ C^*+j(P) which we now define. For any t∈ set t:=[t-1,t+1]× Y. Let P_0=[-1,1]× P denote the pull-back of the bundle P to 0. For any ,β∈(P) and any irreducible point ∈ M(,β) let [t]:=|_Y[t]∈^*(P_0) denote the restriction of to the band Y[t]. (The fact that [t] is irreducible follows from Proposition prop:unique-continuation-cylinder.) Choose a base-point y_0∈ Y, and let →^*(P_0) be the natural real vector bundle of rank 3 associated to the base-point (0,y_0)∈0. To define v_3, choose a “generic” smooth section s_1 of . For any ,β∈^*(P) with (β)-()≡38 the matrix coefficient v_3,β is defined to be equation v_3,β:=#{∈M(,β)s_1([0])=0}, where # means the number of points counted modulo 2. To define v_2, let s_2,s_3 be a pair of smooth sections of which define a “generic” section of ⊕. For any ,β∈^*(P) with (β)-()≡28 the matrix coefficient v_2,β is defined to be equation v_2,β:= #{∈M(,β)s_2,s_3 are linearly dependent at [0]}. Note that, for dimensional reasons, s_2 and s_3 cannot simultaneously vanish at [0] for any ∈ M(,β). prop For j=2,3 one has dv_j=v_jd as homomorphisms C^*(P)→ C^*+j+1(P). To prove this for j=2, let ,β∈^*(P) with (β)-()≡38. The number of ends of the 1-manifold {∈ M(,β)s_2,s_3 are linearly dependent at [0]}, counted modulo 2, is (dv_2+v_2d),β. Since the number of ends must be even, this proves the assertion for j=2. The case j=3 is similar. □ The homomorphism u_j:I^*(P)→ I^*+j(P) induced by v_j is independent of the sections s_i. For u_3 this will follow from Lemma <ref> below, and a similar argument works for u_2. We consider again the bundle P_0=[-1,1]× P over Y[0]=[-1,1]× Y. Let U be an open subset of ^*(P_0) such that for all ,β∈^*(P) with (,β)≤3 and every ∈ M(,β) one has that [0]∈ U. A section s of |_U is said to satisfy Property 3 if for all ,β as above the map M(,β)→, ↦ s([0]) is transverse to the zero-section in . Let U⊂^*(P_0) be as in Definition <ref> and suppose s,s' are sections of |_U satisfying Property 3. Let v_3,v'_3 be the corresponding cup products defined as in eqn:v3def. Then there is an endomorphism H:C(P)→ C(P) such that v_3+v'_3=dH+Hd. For a “generic” section of the map f_β:M(,β)×[0,1]→, ↦(1-t)s([0])+ts'([0])+t(1-t)([0]) is transverse to the zero-section whenever (,β)≤3. Fix such a and let Z_β denote the zero-set of f_β. If (,β)=2 then Z_β is a finite set. Let H be the homomorphism with matrix coefficients H,β=#Z_β. If (,β)=3 then Z_β is a compact 1–manifold-with-boundary. Counted modulo 2, the number of boundary points of Z_β is (v_3+v'_3),β, whereas the number of ends is (dH+Hd),β. These two numbers must agree, proving the lemma.□ Let W be a smooth, compact, oriented, connected 4–manifold with two boundary components, say W=-Y_0∪ Y_1. Let Q→ W be an 3 bundle, and let P_i be the restriction of Q to Y_i. Suppose one of the following two conditions holds. (i) At least one of the bundles P_0,P_1 is admissible. (ii) Both Y_0 and Y_1 are homology spheres, the bundle Q is trivial, and H_1(W;)=0 and b_+^2(W)=0. Then the homomorphism T:I(P_0)→ I(P_1) induced by (W,Q) satisfies Tu_j=u_jT for j=2,3. Moreover, if (ii) holds then T=:I^4(Y_0)→/2.□ If P→ Y is an admissible 3 bundle then u_3=0 on I(P). By Proposition <ref> there is an Hermitian 2–plane bundle →^* such that ≈^0_. For a “generic” section s of , we have s([0])≠0 whenever lies in a moduli space M(,β) of dimension at most 3. Given such a section s, let U be the open subset of ^* where s≠0. 
Then |_U splits as an orthogonal sum |_U=⊕ L of two complex line bundles. Hence |_U has a nowhere vanishing trace-free skew-Hermitian endomorphism ( [ i 0; 0 -i ]). This yields a non-vanishing section s' of |_U. Let s be the restricion to U of a “generic” section of , and let v_3,v'_3 be the cup products defined by s,s', respectively. Then v'_3=0, so by Lemma <ref> we have v_3=dH+Hd. By definition, v_3 induces the cup product u_3 in cohomology, so u_3=0.□ Let Y be an oriented homology 3–sphere and Y' the result of (±1) surgery on a knot in Y. Let n be a non-negative integer. (i) If (u_3)^n=0 on I(Y) then (u_3)^n+1=0 on I(Y'). (ii) If (u_2)^n=0 on I(Y) and has genus 1 then (u_2)^n+1=0 on I(Y'). If R is a commutative ring and A⟶ B⟶ C an exact sequence of modules over the polynomial ring R[u] such that u^m=0 on A and u^n=0 on C for non-negative integers m,n then u^m+n=0 on B. (Here, u^0 acts as the identity map.) Now suppose Y' is (-1) surgery on . (If instead Y' is (+1) surgery on then the proof is similar with the roles of Y,Y' reversed.) Let Y” be 0 surgery on and I(Y”) the instanton cohomology of the non-trivial 3 bundle over Y”. We apply the above observation to the long exact surgery sequence (see <cit.>) ⋯→ I(Y”)→ I(Y)→ I(Y')→ I(Y”)→⋯ Statement (i) now follows from Proposition <ref>. To prove (ii), recall that if P_T^3 is a non-trivial 3 bundle over the 3–torus then I(P_T^3) is non-zero in two degrees differing by 4 modulo 8 and zero in all other degrees. Therefore, u_2=0 on I(P_T^3). If has genus 1 then by arguing as in the proof of <cit.> we find that u_2=0 on I(Y”), from which (ii) follows.□ As a special case of Proposition <ref> we have the following corollary. If Y is (±1) surgery on a knot in S^3 then u_3=0 on I(Y). Let P→ Y be an 3 bundle. We assume P is admissible if Y is not a homology sphere. Then the endomorphisms u_2 and u_3 on I(P) are nilpotent. In other words, there is a positive integer n such that u_2^n=0, u_3^n=0 on I(P). We use the same link reduction schemes as in the proofs of <cit.>. In the present case there is no need to consider any reduced groups, as the cup products u_j are defined on all of I(Y).□ We include here a result for oriented homology 3–spheres Y obtained by adapting the proof of Proposition <ref> for j=2 to 2–dimensional moduli spaces M(,θ). This result will be used in Proposition <ref> below. For any ∈^*(Y) we introduce the temporary notation M_:={∈ M(,θ)s_2∧ s_3=0 at [0], and _([0,∞))≥}, where is a small positive constant. If M(,θ)<6 then M_ is a manifold-with-boundary, and M_ has a description analogous to that of M_, just replacing the inequality _([0,∞))≥ by an equality. We define homomorphisms :C^2(Y)→/2, ^-:C^3(Y)→/2 on generators by :=#( M_), ^-β:=# M_β. v_2+^-d=. Let ∈^*(Y), ()=2. Then M_ is a 1–manifold-with-boundary. The number of boundary points, counted modulo 2, is by definition, and this must agree with the number of ends of M_, which is ( v_2+^-d).□ §.§ Commutators of cup products Let Y be an oriented homology 3–sphere. We introduce a degree 4 endomorphism ϕ:C^*(Y)→ C^*+4(Y) which will be used to describe the commutator of v_2 og v_3. defn For any ,β∈^*(Y) let 23(,β) be the subspace of × consisting of those points (,t) satisfying the following conditions: itemize * s_1([-t])=0, * s_2([t]) and s_3([t]) are linearly dependent. If (β)-()≡48 then 23(,β) consists of a finite number of points (see part (I) of the proof of Proposition <ref> below), and we set ϕ,β:=#23(,β). 
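Behind this finiteness assertion is the following expected-dimension count (a sketch; transversality for “generic” choices and the necessary compactness are dealt with in part (I) of the proof referred to above). The condition $s_1(\omega[-t])=0$ is the zero-locus of a section of a rank-$3$ bundle, hence a codimension-$3$ condition, while linear dependence of $s_2,s_3$ at $\omega[t]$ is a codimension-$2$ condition, the locus $L^*$ of linearly dependent pairs in a rank-$3$ bundle having codimension $n-1=2$. As the index difference is $4$ modulo $8$ and expected dimensions are taken in $[0,7]$, one has $\dim M(\alpha,\beta)=4$, so
\[
\dim\bigl(M(\alpha,\beta)\times\mathbb R\bigr)-(3+2)=(4+1)-5=0 .
\]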
prop If Y is an oriented integral homology 3-sphere then for “generic” sections s_1,s_2,s_3 one has equation v_2v_3+v_3v_2+'=dϕ+ϕd. Hence, on I(Y) one has equation u_2u_3+u_3u_2=_0'_0. The proof will be given in Subsection <ref>. Let v_3,v_3':C^*(Y)→ C^*+3(Y) be the cup products defined by “generic” sections s,s' of . At least in degrees different from 4, the commutator of v_3 and v_3' is given by a formula analogous to eqn:v2v3chhom. This formula involves the homomorphism ψ:C^p(Y)→ C^p+5(Y), p≠4 with matrix coefficients ψ,β=#{(,t)∈× s([-t])=0=s'([t])}. The condition p≠4 is imposed to make sure that factorizations through the trivial connection do not occur in the moduli spaces M(,β). For q≢3,48 one has dψ+ψ d=v_3v'_3+v'_3v_3 as maps C^q(Y)→ C^q+6(Y). If the sections s,s' are sufficiently close (in a certain sense) then v_3=v_3' (see Lemma <ref> below) and the following hold. If the sections s,s' are sufficiently close then there exist * an extension of ψ to a cochain map C^*(Y)→ C^*+5(Y) defined in all degrees, and * a homomorphism Ξ:C^*(Y)→ C^*+4(Y) such that ψ=v_2v_3+dΞ+Ξ d. The proof will be given in Subsection <ref>. § DEFINITION OF THE INVARIANT Let Y be any oriented homology 3-sphere. defn We define a non-negative integer ζ_2(Y) as follows. If _0=0 on (u_3)⊂ I(Y) set ζ_2(Y):=0. Otherwise, let ζ_2(Y) be the largest positive integer n for which there exists an x∈(u_3) such that _0u_2^kx= 0 for 0≤ k<n-1, 1 for k=n-1. Here, u_2^k denotes the k'th power of the endomorphism u_2. Note that if x is as in Definition <ref> then using the relation eqn:u2u3 one finds that u_3u_2^kx=0 for 0≤ k≤ n-1. defnSet (Y):=ζ_2(Y)-ζ_2(-Y). An alternative description of will be given in Proposition <ref> below. If ('_0)⊂(u_3) in I^1(Y) then ζ_2(-Y)=0. Otherwise, ζ_2(-Y) is the largest positive integer n for which the inclusion (u_2^k'_0)⊂(u_3)+∑_j=0^k-1(u_2^j'_0) in I(-Y) holds for 0≤ k<n-1 but not for k=n-1. Of course, in eqn:imu2delincl it suffices to sum over those j that are congruent to k mod 4, since I(-Y) is mod 8 periodic. Recall that I^q(Y) and I^5-q(-Y) are dual vector spaces for any q∈/8. Furthermore, the maps _0:I^4(Y)→/2, u_3:I^q(Y)→ I^q+j(Y) are dual to '_0:/2→ I^1(-Y), u_3:I^5-q-j(-Y)→ I^5-q(-Y), respectively. In general, the kernel of a linear map between finite-dimensional vector spaces is equal to the annihilator of the image of the dual map. Applying this to _0u_2^j:I^4-2j(Y)→/2 we see that the inclusion eqn:imu2delincl holds if and only if (_0u_2^k)⊃(u_3)∩⋂_j=0^k-1(_0u_2^j) in I(Y). This proves the lemma.□ prop Either ζ_2(Y)=0 or ζ_2(-Y)=0. Suppose ζ_2(Y)>0, so there is an x∈ I^4(Y) such that u_3x=0 and _0x=1. Then Proposition <ref> yields '_0(1)=u_3u_2x, hence ζ(-Y)=0 by Lemma <ref>.□ We now reformulate the definition of ζ_2 in terms of the mapping cone of v_3. This alternative definition will display a clear analogy with the instanton h-invariant and will be essential for handling the algebra involved in the proof of additivity of . For q∈/8 set MC^q(Y):=C^q-2(Y)⊕ C^q(Y), and define D:MC^q(Y)→ MC^q+1(Y), (x,y)↦(dx,v_3x+dy). Then D∘ D=0, and we define MI(Y) to be the cohomology of the cochain complex (MC(Y),D). The short exact sequence of cochain complexes 0→ C^*(Y)→ MC^*(Y)τ→ C^*-2(Y)→0, where (y)=(0,y) and τ(x,y)=x, gives rise to a long exact sequence equation ⋯→I^q-3(Y)u_3→I^q(Y)_* →MI^q(Y)τ_*→I^q-2(Y)→⋯. We introduce some extra structure on *j(Y). Firstly, the homomorphisms gather* :=∘τ:MC^6(Y)→/2, ':=∘':/2→MC^1(Y) induce homomorphisms MI^6(Y)_0⟶/2'_0⟶ MI^1(Y). 
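That these maps descend to the mapping-cone cohomology can be checked directly (a sketch, in the notation above: $\tau(x,y)=x$, $\iota(y)=(0,y)$, $\bar\delta=\delta\circ\tau$, $\bar\delta'=\iota\circ\delta'$, where $\delta\colon C^4(Y)\to\mathbb Z/2$ and $\delta'\colon\mathbb Z/2\to C^1(Y)$ are the maps recalled in the previous subsection, and coefficients are taken in $\mathbb Z/2$). Using $d\circ d=0$, $dv_3=v_3d$, $\delta d=0$ and $d\delta'=0$ one computes
\[
D\bigl(D(x,y)\bigr)=\bigl(d(dx),\,v_3\,dx+d(v_3x+dy)\bigr)=\bigl(0,\,(v_3d+dv_3)x\bigr)=(0,0),
\]
\[
\bar\delta\bigl(D(x,y)\bigr)=\delta(dx)=0,\qquad D\bigl(\bar\delta'(1)\bigr)=D\bigl(0,\delta'(1)\bigr)=\bigl(0,d\delta'(1)\bigr)=(0,0),
\]
which recovers $D\circ D=0$ and shows that $\bar\delta$ vanishes on coboundaries while $\bar\delta'(1)$ is a cocycle.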
We extend trivially to all of MC(Y), and similarly for _0. Furthermore, we define a homomorphism V:MC^*(Y)→ MC^*+2(Y), (x,y)↦(v_2x,ϕ x+v_2y). A simple calculation yields equation DV+VD=', which is analogous to the relation <cit.> in rational instanton homology. It follows that V induces homomorphisms gather* MI^q(Y)→MI^q+2(Y), q≢6,78, MI^6(Y)∩(_0)→MI^0(Y), each of which will be denoted by U. If _0=0 on MI^6(Y) then ζ_2(Y)=0. Otherwise, ζ_2(Y) is the largest positive integer n for which there exists a z∈ MI(Y) such that _0 U^kz=cases 0 for 0≤ k<n-1, 1 for k=n-1. This follows immediately from the definitions.□ § DEFINITE 4-MANIFOLDS The goal of this section is to prove Theorem <ref>. Let X be an oriented, connected Riemannian 4–manifold with a cylindrical end [0,∞)× Y, where Y is an integral homology sphere. Suppose b_1(X)=0=b^+(X). Let E→ X be an oriented Euclidean 3–plane bundle and w_2(E) its second Stiefel-Whitney class. We will count reducibles in ASD moduli spaces for E with trivial asymptotic limit. Let w∈ H^2(X,;/2) be the unique lift of w_2(E). Abusing notation, we denote by w_2(E)^2∈/4 the value of the Pontryagin square w^2∈ H^4(X,;/4) on the fundamental class in H_4(X;;/4). Then for ∈^*(Y) the expected dimension of a moduli space for E with asymptotic limit satisfies M_(X,E;)≡()-2w_2(E)^28. If ρ is a trivial connection in E|_ then (E,ρ) is an integer reducing to -w_2(E)^2 modulo 4. Hence, M_k:=M_k(X,E;θ) is defined for integers k satisfying k≡-w_2(E)^24. Moreover, M_k is empty for k<0, and M_0 (when defined) consists of flat connections. The expected dimension is M_k=2k-3. §.§ Reducibles In this subsection we restrict to k>0. After perturbing the Riemannian metric on X in a small ball we can arrange that M_k contains no twisted reducibles (see <cit.>). The set M_k of reducible (i.e. Abelian) points in M_k has a well known description in terms of the cohomology of X, which we now recall. Let P:={c∈ H^2(X;) [c]_2=w_2(E), c^2=-k}, where [c]_2 denotes the image of c in H^2(X;/2). Let P:= P/±1 be the quotient of P by the involution c↦-c. There is a canonical bijection M_k→ P. If [A]∈ M_k then A respects a unique splitting E=⊕ L, where is a trivial rank 1 subbundle of E. A choice of orientation of defines a complex structure on L. Mapping [A] to the point in P represented by c_1(L) yields the desired bijection. For further details see <cit.> and <cit.>.□ Assuming P is non-empty we now express the number |P| of elements of P in terms of the intersection form of X and the torsion subgroup of H^2(X;). For any v∈ H^2(X;) let v̅ denote the image of v in H^2(X;)/. Choose a∈ P and let Q_a:={r∈ H^2(X;)/ r≡a̅ mod 2, r^2=-k}. Define Q_a:= Q_a/±1. |P|=|2|·|Q_a|. Note that 2 has even order precisely when H^2(X;) contains an element of order 4. Because k>0 we have that (-1) acts without fixed-points on both P and Q_a. Therefore, | P|=2|P|, | Q_a|=2|Q_a|. The short exact sequence 0→2→→/2→0 gives rise to a long exact sequence ⋯→ H^2(X;)2→ H^2(X;)→ H^2(X;/2)→ H^3(X;)→⋯. From this sequence we see that there is a well defined map P→ Q_a, c↦c̅ which descends to an injective map f: P/2→ Q_a. In fact, f is bijective. To see that f is surjective, let r∈ Q_a. Then r=a̅+2x̅=a+2x for some x∈ H^2(X;), and a+2x∈ P. This shows that | P|=|2|·| Q_a|. Combining this with eqn:2PQ we obtain the proposition.□ §.§ 2–torsion invariants of 4–manifolds The proof of Theorem <ref> will involve certain 2–torsion Donaldson invariants which we now define. 
Let d_0 be the smallest expected dimension of any moduli space M_k=M_k(X,E;θ) that contains a reducible, where k is a non-negative integer. For any pair (r,s) of non-negative integers satisfying 2r+3s≤ d_0+2 we will define an element rs= rs(X,E)∈ I(Y) which will be independent of the Riemannian metric on X and also independent of the choice of small holonomy perturbations. To define rs, choose disjoint compact codimension 0 submanifolds Z_1,…,Z_r+s of X and base-points z_j∈ Z_j. It is convenient to assume that each of these submanifolds contains a band [t_j,t_j+1]× Y for some t_j≥1. (We assume that the perturbed ASD equation is of gradient flow type in the region [1,∞)× Y.) Then Proposition <ref> guarantees that every perturbed ASD connection in E with irreducible limit will restrict to an irreducible connection over each Z_j. Choose “generic” sections {_ij}_i=1,2,3 of the canonical 3–plane bundle _j→^*(Z_j,E_j), where E_j:=E|_Z_j. For any ∈^*(Y) let d=d() be the integer such that 0≤ d-2r-3s≤7, d≡()-2w_2(E)^28. Let M_r,s(X,E;) be the set of all ∈ M_(d)(X,E;) such that * _2,j,_3,j are linearly dependent at |_Z_j for j=1,…,r, and * _1,j(|_Z_j)=0 for j=r+1,…,r+s. Let q_r,s:=∑_#M_r,s(X,E;)·∈ C(Y), where the sum is taken over all generators in C(Y) of index 2w_2(E)^2+2r+3s. Then q_r,s is a cocycle, and we define rs(X,E):=[q_r,s]∈ I(Y). Standard arguments show that rs is independent of the choice of submanifolds Z_j and sections _ij. Let k be an integer greater than one. If M_ℓ is empty for ℓ<k then k-20=#M_k. Deleting from M_k a small neighbourhood of each reducible point we obtain a manifold-with-boundary W with one boundary component P_η for each reducible η, each such component being diffeomorphic to k-2. Let Ŵ:=W∩ M_k-2,0(X,E;θ) be the set of all ∈ W such that _2,j and _3,j are linearly dependent at |_Z_j for j=1,…,k-2. Then Ŵ is a 1–manifold-with-boundary. For dimensional reasons and because of the condition that M_ℓ be empty for ℓ<k, bubbling cannot occur in sequences in Ŵ. Therefore, the only source of non-compactness in Ŵ is factorization over the end of X, so the number of ends of Ŵ equals k-20 modulo 2. As for the boundary points of Ŵ, observe that for every x∈ X the restriction of the 3–plane bundle _θ,x→ M^*_k to P_η is isomorphic to the direct sum ⊕ L of a trivial real line bundle and the tautological complex line bundle. It follows easily from this that P_η∩Ŵ has an odd number of points for every reducible η, hence |Ŵ|≡|M_k|2. Since the number of boundary points of Ŵ must agree with the number of ends when counted modulo 2, this proves the proposition.□ In the proof of the following proposition and at many places later we will make use of a certain kind of cut-off function. This should be a smooth function b:→ such that b(t)= 0 for t≤-1, 1 for t≥1. Suppose 2r+3s≤ d_0+2, so that rs is defined. (i) rs=u_2r-1s if r≥1. (ii) rs=u_3 rs-1 if s≥1. We only spell out the proof of (ii), the proof of (i) being similar. Let M_r,s-1(X,E;) be defined as above, but using only the submanifolds Z_1,…,Z_r+s-1 and the corresponding sections _ij. Choose a path :[-1,∞)→ X such that (-1)=z_r+1 and (t)=(t,y_0) for t≥0, where y_0∈ Y is a base-point. For any ∈^*(Y) and x∈ X let _,x→ M_r,s-1(X,E;) be the canonical 3–plane bundle associated to the base-point x. For any =[A]∈ M_r,s-1(X,E;) and t≥-1 let _,t:(_,(t))_→(_,(-1))_ be the isomorphism defined by the holonomy of A along . Here, (_,x)_ denotes the fibre of the bundle _,x at the point . 
Given a “generic” section s of →^*(Y[0]) we define a section s_ of the bundle _,(-1)×[-1,∞)→ M_r,s-1(X,E;)×[-1,∞) by s_(,t):=(1-b(t-2))·_1,r+s(|_Z_r+s) +b(t-2)·_,t(s([t])), where b is as in eqn:b-prop1. Let j:=2w_2(E)^2+2r+3s∈/8. If ()=j-1 then the zero set s_(0) is a finite set. Summing over such we define h_r,s:=∑_(#s_(0))·∈ I^j(Y). Counting ends and boundary points of the 1–manifolds s_β(0) for (β)=j we see that dh_r,s+v_3q_r,s-1=q_r,s. Passing to cohomology, we obtain (ii).□ If E is strongly admissible then D_r,s(X,E)=0 for s>0. Let f:→ X be as in Definition <ref> with v=w_2(E). For t≥0 let X t be the result of deleting from X the open subset (t,∞)× Y. Choose t>0 so large that X t contains f(). Then E|_X t is strongly admissible. Choose the submanifolds Z_1,…,Z_r+s such that Z_r+s=X t. By Proposition <ref> the (frame bundle of) _j→^*(E_r+s) lifts to a 2 bundle. For j=1,…,r+s-1 choose “generic” sections {_ij}_i=1,2,3 of _j. Arguing as in the proof of Proposition <ref> we see that there is an open subset U⊂^*(Z_r+s,E_r+s) and a section of _r+s such that if is any element of a 3–dimensional moduli space M_r,s-1(X,E;) then |_Z_r+s∈ U and (|_Z_r+s)≠0. Taking _1,r+s:= we have that all 0–dimensional moduli spaces M_r,s(X,E;) are empty. Reasoning as in the proof of Lemma <ref> we conclude that D_r,s=0.□ §.§ Lower bound on Recall Definition <ref> above. Given a space, X, a non-zero class w∈ H^2(X;)/torsion is called strongly admissible if some (hence every) lift of w to H^2(X;) maps to a strongly admissible class in H^2(X;/2). Let V be a smooth compact oriented connected 4-manifold whose boundary is a homology sphere Y. Suppose the intersection form of V is negative definite and at least one of the following two conditions holds: (i) H^2(V;) contains no 2–torsion. (ii) H^2(V;) contains no element of order 4, and w^2≢04. Furthermore, either w is strongly admissible or u_3=0 on I(Y) (or both). Let J:=H^2(V;)/torsion, and let w be an element of J which is not divisible by 2. Let k be the minimal square norm (with respect to the intersection form) of any element of w+2J. Let n be the number of elements of w+2J of square norm k. If k≥2 and n/2 is odd then equation (Y)≥k-1. Note that if we leave out case (ii) then the theorem says the same as Theorem <ref>. After performing surgery on a collection of loops in V representing a basis for H_1(V;)/ we may assume that b_1(V)=0. From the exact sequence eqn:2long-exact-seq we see that the 2–torsion subgroup of H^2(V;) is isomorphic to H^1(V;/2). Let X:=V∪(0,∞)× Y be the result of adding a half-infinite cylinder to V, and choose a Riemannian metric on X which is of cylindrical form over the end. We identify the (co)homology of X with that of V. Choose a complex line bundle L→ X whose Chern class represents w. Choose a Euclidean metric on the 3–plane bundle E:=⊕ L. Since we assume that H^2(X;) contains no element of order 4, it follows from Proposition <ref> that M_ℓ contains an odd number of reducibles for ℓ=k but no reducibles for 0<ℓ<k. We now show that if w^2≡0 (4), so that M_0 is defined, then M_0 is free of reducibles. Suppose A is a connection in E representing a reducible point in M_0. Then A preserves some orthogonal splitting E=⊕ L', where → X is a real line bundle. Because Condition (i) of the proposition must hold, the bundle is trivial. Choose a complex structure on L'. Since L' admits a flat connection, its Chern class c_1(L') is a torsion class in H^2(X;). 
But c_1(L) and c_1(L') map to the same element of H^2(X;/2), namely w_2(E), hence c_1(L)=c_1(L')+2a for some a∈ H^2(X;). This contradicts our assumption that w∈ J is not divible by 2. Thus, M_0 is free of reducibles as claimed. By Proposition <ref> we have D_k-2,0≠0, and Proposition <ref> says that D_k-2,0=u_2^k-2D_0,0. Now suppose w is strongly admissible (which is trivially the case if Condition (i) holds). Then the bundle E is strongly admissible, so by Propositions <ref> and <ref> we have u_3D_0,0=D_0,1=0. This proves eqn:q2ineq.□ § OPERATIONS DEFINED BY COBORDISMS §.§ Cutting down moduli spaces Let Y_0,Y_1,Y_2 be oriented (integral) homology 3–spheres and W a smooth compact connected oriented 4–manifold such that H_i(W;)=0 for i=1,2 and W=(-Y_0)∪(-Y_1)∪ Y_2. Then we call W a (4–dimensional) pair-of-pants cobordism from Y_0∪ Y_1 to Y_2, or a pair-of-pants cobordism from Y_1 to (-Y_0)∪ Y_2. We will consider various operations on Floer cochain complexes induced by pair-of-pants cobordism. To define these we first introduce some notation. Let X be an oriented connected Riemannian 4–manifold with incoming tubular ends (-∞,0]× Y_j, j=0,…,r and outgoing tubular ends [0,∞)× Y_j, j=r+1,…,r', where each Y_j is an homology sphere. For t≥0 let X t be the result of deleting from X the open pieces (-∞,-t)× Y_j, j=0,…,r and (t,∞)× Y_j, j=r+1,…,r'. We assume X0 is compact. For i=0,…,r' let y_i∈ Y_i be a base-point and set e_i:= -1, i=0,…,r, 1, i=r+1,…,r'. For any integers j,k in the interval [0,r'] such that j<k let _jk:→ X be a smooth path satisfying _jk(t)∈ X1 for |t|≤1 and _jk(t)= (-e_jt,y_j), t≤-1, (e_kt,y_k), t≥1. Loosely speaking, the path _jk enters along the jth end and leaves along the kth end of X. Let =(_1,…,_r'), where _j∈(Y_j) and at least one _j is irreducible. For the remainder of this subsection we write M:=M(X,E;), where E→ X is the product 3 bundle. The unique continuation result of Proposition prop:unique-continuation-cylinder ensures that if _j is irreducible then the restriction of any element of M to a band on the jth end of X will be irreducible. Let → M× X be the universal (real) 3–plane bundle (see <cit.>). For any t≥0 let t denote the restriction of to M× X t. Given a base-point x_0∈ X let _X,x_0;→ M be the canonical 3–plane bundle, which can be identified with the restriction of to M×{x_0}. If :J→ X is a smooth path in X defined on some interval J then a section of the pull-back bundle (𝕀×)^* over M× J is called holonomy invariant if for all =[A]∈ M and real numbers s<t one has that (,s) is mapped to (,t) by the isomorphism _(,(s))→_(,(t)) defined by holonomy of A along the path |_[s,t]. Suppose Z⊂ X is a compact codimension 0 submanifold-with-boundary such that A|_Z is irreducible for every [A]∈ M. Given a base-point z_0∈ Z, let _Z,z_0→^*(E|_Z) be the base-point fibration, and let R_Z:M→^*(E|_Z), ↦|_Z. Then the pull-back bundle R_Z^*_Z,z_0 is canonically isomorphic to _X,z_0;, and we will usually identify the two bundles without further comment. Choose (smooth) sections z_1,z_2,z_3 of 2 and for any x∈ X2 let M∩ w_3(x):= {∈ M z_1(,x)=0}, M∩ w_2(x):= {∈ M z_2,z_3 are linearly dependent at (,x)}. For j=0,…,r' let _j→^*(Y_j[0]) be the canonical 3–plane bundle associated to a base-point (0,y_j). For j<k, any j', and i=1,2,3 choose * a section ijk of _j and a section ijk of _k, * a section ijk of 2, * a section s_ij' of _j'. Let b_-1,b_0,b_1 be a partion of unity of subordinate to the open cover {(-∞,-1),(-2,2),(1,∞)}. 
If j<k and both _j,_k are irreducible we introduce, for i=1,2,3, a section of the bundle (𝕀×_jk)^* associated, loosely speaking, to a base-point moving along the path _jk. Precisely, we define s_ijk(,t):=b_-1(t) ijk(|_Y_j[-e_jt]) +b_0(t) ijk(|_X2,_jk(t)) +b_1(t) ijk(|_Y_k[e_kt]). Using these sections, we define cut-down moduli spaces M∩ w_3(_jk):= {(,t)∈ M× s_1jk(,t)=0}, M∩ w_2(_jk):= {(,t)∈ M× s_2jk, s_3jk are linearly dependent at (,t)}. We now consider the case of a base-point moving along the jth end. For t≥0 let _j(t):=(e_jt,y_j). If _j is irreducible let M∩ w_2(_j):={(,t)∈ M×[0,∞) s_2j,s_3j are linearly dependent at |_Y_j[e_jt]}. We omit the definition of M∩ w_3(_j) since it will not be needed in the remainder of this paper (although something close to it was used in the proof of Proposition <ref>). We can also combine the ways moduli spaces are cut down in the above definitions. Namely, for ℓ,ℓ'∈{2,3} let M∩ w_ℓ(x)∩ w_ℓ'(_jk):= {(,t)∈ M∩ w_ℓ'(_jk) ∈ M∩ w_ℓ(x)}, M∩ w_ℓ(_jk)∩ w_ℓ'(_j'k'):= {(,t,t')∈ M×× (,t)∈ M∩ w_ℓ(_jk), (,t')∈ M∩ w_ℓ'(_j'k')}, M∩ w_ℓ(_jk)∩ w_2(_j'):= {(,t,t')∈ M××[0,∞) (,t)∈ M∩ w_ℓ(_jk), (,t')∈ M∩ w_2(_j')}. If one of the _js is trivial, say _h=θ, and M<8 (to prevent bubbling) then one can also cut down M by, loosely speaking, evaluating w_2 or w_3 over the “link of θ at infinity” over the hth end of X. We now make this precise in the case of w_2 and an outgoing end [0,∞)× Y_h. The definitions for w_3 or incoming ends are similar. To simplify notation write Y:=Y_h. We introduce a function τ^+=τ^+_h on M related to the energy distribution of elements over the hth end. Choose >0 so small that for any β∈(Y) the Chern-Simons value (β)∈/ has no real lift in the interval (0,]. (Recall that we assume (θ)=0.) Given ∈ M, if there exists a t>0 such that _([t-2,∞)× Y)= then t is unique, and we write t^+():=t. This defines t^+ implicitly as a smooth function on an open subset of M. We modify t^+ to get a smooth function τ^+:M→[1,∞) by τ^+():= 1+b(t^+()-2)·(t^+()-1) if t^+() is defined, 1 else, where the cut-off function b is as in eqn:b-prop1. Note that τ^+()<3 if t^+()<3 and τ^+()=t^+() if t^+()≥3. The restriction of to the band Y[τ^+()] will be denoted by R^+()∈(Y[0]). In the above situation there is a real number T_0 such that if is any element of M satisfying τ^+()>T_0-1 then R^+() is irreducible. Suppose the lemma is false. Then we can find a sequence _n in M such that τ^+(_n)→∞ and R^+(_n) is reducible for every n. Let A_n be a smooth connection representing _n, and let t_n=τ^+(_n). By assumption, there is no bubbling in M, so we can find gauge transformations u_n defined over [0,∞)× Y and a smooth connection A' over such that, for every constant c>0, the sequence u_n(A_n)|_[t_n-c,t_n+c] converges in C^∞ to A'|_[-c,c]. The assumption on means that no energy can be lost over the end [0,∞)× Y in the limit, hence _A'([-2,∞)× Y)=. In particular, A' is not trivial. But there are no non-trivial reducible finite-energy instantons over (as long as the perturbation of the Chern-Simons functional is so small that there are no non-trivial reducible critical points). Therefore, A' must be irreducible. From the unique continuation result of Proposition <ref> it follows that A'|_{0}× Y is also irreducible, so A_n is irreducible for large n. This contradiction proves the lemma. □Let T_0 be as in the lemma. For any element of M for which R^+() is irreducible, let s'_ih() denote the holonomy invariant section of (𝕀×_h)^* such that s'_ih(,τ^+())=s_ih(R^+()). 
Let x_h:=(0,y_h) and define a section of _X,x_h; by s_ih():=(1-b(τ^+()-T_0))· z_i(|_X2,x_h) +b(τ^+()-T_0)· s'_ih(R^+()), where again b is as in eqn:b-prop1. Let M∩ w_2(τ^+):={∈ Ms_2h,s_3h linearly dependent at }. If j<k and both _j,_k are irreducible let M∩ w_ℓ(_jk)∩ w_2(τ^+):= {(,t)∈ M∩ w_ℓ(_jk)∈ M∩ w_2(τ^+)}. If M is regular, then the various cut down moduli spaces defined above will be transversely cut out when the sections involved are “generic”. §.§ Operations, I We now specialize to the case when X has two incoming ends (-∞,0]× Y_j, j=0,1 and one outgoing end [0,∞)× Y_2, and H_i(X;)=0, i=1,2. Such a cobordism gives rise to a homomorphism A:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q(Y_2) for any p,q∈/8, with matrix coefficients A(_0⊗_1),_2:=#M(X;) for generators _0∈ C^p(Y_0), _1∈ C^q(Y_1), and _2∈ C^p+q(Y_2), where =(_0,_1,_2). We can construct more homomorphisms using the sections s_ijk chosen above. For any path _jk as above and k=2,3 let T_i,j,k:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+i-1(Y_2) be defined on generators by T_i,j,k(_0⊗_1),_2:= #[M(X;)∩ w_i(_jk)]. For the cases used in this paper we introduce the simpler notation B:=T_3,0,1, E:=T_3,0,2, A':=T_2,1,2. We will also consider homomorphisms defined using two base-points, each moving along a path in X. At this point we only define B':C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+3(Y_2) by B'(_0⊗_1),_2:= #[M(X;)∩ w_3(_01)∩ w_2(_12)]. In the next proposition, the differential in the cochain complex C(Y_i) will be denoted by d (for i=0,1,2), and d=d⊗1+1⊗ d will denote the differential in C(Y_0)⊗ C(Y_1). Let v_3:=v_3⊗1+1⊗ v_3, regarded as a degree 3 cochain map from C(Y_0)⊗ C(Y_1) to itself. (i) dA+A d=0. (ii) dB+B d=A v_3. (iii) dE+E d=A(v_3⊗1)+v_3A. (iv) dA'+A' d=A(1⊗ v_2)+v_2A. (v) dB'+B' d=B(1⊗ v_2)+v_2B +A' v_3+A(1⊗ϕ)+A_θ(1⊗). The only non-trivial part here is (v), where one encounters factorization through the trivial connection over the end (-∞,0]× Y_1. This can be handled as in the proof of Proposition <ref> given in Subsection <ref>, to which we refer for details.□ The homomorphism :MC^*(Y_0)⊗ MC^*(Y_1) → C^*(Y_2), (x_0,y_0)⊗(x_1,y_1) ↦ B(x_0,x_1)+A(x_0⊗ y_1+y_0⊗ x_1) is a cochain map of degree -2. Let D=D⊗1+1⊗ D be the differential in the complex MC(Y_1)⊗ MC(Y_2). Then D[(x_0,y_0)⊗(x_1,y_1)] = [(dx_0,v_3x_0+dy_0)⊗(x_1,y_1)+(x_0,y_0)⊗(dx_1,v_3x_1+dy_1)] =B(dx_0⊗ x_1+x_0⊗ dx_1) +A[dx_0⊗ y_1+(v_3x_0+dy_0)⊗ x_1+ x_0⊗(v_3x_1+dy_1)+y_0⊗ dx_1] =B d(x_0⊗ x_1) +A[ v_3(x_0⊗ x_1)+ d(x_0⊗ y_1+y_0⊗ x_1)] =d[(x_0,y_0)⊗(x_1,y_1)], where the last equality follows from Proposition <ref>.□ The homomorphism MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) obtained from Proposition <ref> will also be denoted by . In order to simplify notation we will often write , instead of _0,_0 if no confusion can arise. For all a∈ MI(Y_0), b∈ MI(Y_1), the following hold. (i) If a=0 then (Ua,b)=u_2(a,b). (ii) If b=0 then (a,Ub)=u_2(a,b). We spell out the proof of (ii). Reversing the roles of Y_0,Y_1 yields a proof of (i). Let ',:MC^*(Y_0)⊗ MC^*(Y_1)→ C^*(Y_2) be given by '[(x_0,y_0)⊗(x_1,y_1)] := B'(x_0,x_1)+A'(x_0⊗ y_1+y_0⊗ x_1), [(x_0,y_0)⊗(x_1,y_1)] :=( x_1)A_θ(x_0). Let D be as in the proof of Proposition <ref>. We show that d'+' D=v_2+(1× V)+, from which (ii) follows. Observe that the first four lines in the calculation of D in Proposition <ref> carry over to ' D. 
That proposition then gives ' D [(x_0,y_0)⊗(x_1,y_1)] =(B' d+A' v_3)(x_0⊗ x_1) +A' d(x_0⊗ y_1+y_0⊗ x_1) =dB'(x_0⊗ x_1)+B(x_0⊗ v_2x_1)+v_2B(x_0⊗ x_1) +A(x_0⊗ϕ x_1)+( x_1)A_θ(x_0) +[dA'+A(1⊗ v_2)+v_2A](x_0⊗ y_1+y_0⊗ x_1) =[d'+v_2+(1× V)+][(x_0,y_0)⊗(x_1,y_1)].□ Our next goal is to compute u_2. To this end we introduce some variants Ȧ,Ḃ,A^+,B^+ of the operators A,B. Each of these variants is a homomorphism C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+d(Y_2) for d=2,4,1,3, respectively, defined for all p,q, and the matrix coefficients are Ȧ(_0⊗_1),_2 := #[M(X;)∩ w_2(x_2)], Ḃ(_0⊗_1),_2 := #[M(X;)∩ w_2(x_2)∩ w_3(_01)], A^+(_0⊗_1),_2 := #[M(X;)∩ w_2(_2)], B^+(_0⊗_1),_2 := #[M(X;)∩ w_3(_01)∩ w_2(_2)], where =(_0,_1,_2) as before, x_2=_2(0)∈ X, and _i,_ij are as in Subsection <ref>. (i) dȦ+Ȧ d=0. (ii) dḂ+Ḃ d=Ȧ v_3. (iii) dA^++A^+ d=v_2A+Ȧ. (iv) dB^++B^+ d=A^+ v_3+v_2B+Ḃ. Standard.□ The homomorphism :MC^*(Y_0)⊗ MC^*(Y_1) → C^*(Y_2), (x_0,y_0)⊗(x_1,y_1) ↦Ḃ(x_0,x_1) +Ȧ(x_0⊗ y_1+y_0⊗ x_1) is a (degree preserving) cochain map. The same as for Proposition <ref>, using Proposition <ref> (i), (ii).□ The homomorphism MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) obtained from Proposition <ref> will also be denoted by . As maps MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) one has =u_2. This is analogous to the proof of Proposition <ref>. Let ^+:MC^*(Y_0)⊗ MC^*(Y_1)→ C^*(Y_2) be given by ^+[(x_0,y_0)⊗(x_1,y_1)] := B^+(x_0,x_1)+A^+(x_0⊗ y_1+y_0⊗ x_1). We show that d^++^+ d=v_2+. From Proposition <ref> we get ^+ D(x_0,y_0)⊗(x_1,y_1) =(B^+ d+A^+ v_3)(x_0⊗ x_1)+A^+ d(x_0⊗ y_1+ y_0⊗ x_1) =(dB^++v_2B+Ḃ)(x_0⊗ x_1) +(dA^++v_2A+Ȧ)(x_0⊗ x_1) =(d^++v_2+)(x_0,y_0)⊗(x_1,y_1).□ We also need to bring in moduli spaces over X with trivial limit over the end _+× Y_2. These give rise to homomorphisms A^θ,B^θ,Ȧ^θ,Ḃ^θ:C^p(Y_0)⊗ C^d-p(Y_1)→/2 where d=5,3,3,1, respectively. They are defined on generators by A^θ(_0⊗_1) :=#M(_0,_1,θ), B^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_3(_01)], Ȧ^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_2(x_0), Ḃ^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_2(x_0)∩ w_3(_01). (i) A+ A^θ d=0. (ii) B+B^θ d=A^θ v_3. (iii) Ȧ+Ȧ^θ d=0. (iv) Ḃ+Ḃ^θ d= Ȧ^θ v_3+⊗. Here, (⊗)(x_0⊗ x_1)=( x_0)( x_1). The proof is standard. (i) =0. (ii) u_2=⊗. Statement (i) is proved just as Proposition <ref>, replacing Proposition <ref> by Proposition <ref>. We now prove (ii). For g_i=(x_i,y_i)∈ MC(C_i), i=0,1 let ^θ(g_0⊗ g_1):=Ḃ^θ(x_0⊗ x_1) +Ȧ^θ(x_0⊗ y_1+y_0⊗ x_1). Arguing as in the proof of Proposition <ref> and using Proposition <ref> we obtain ^θ D(g_0⊗ g_1) =(Ḃ^θ d+Ȧ v_3)(x_0⊗ x_1) +Ȧ^θ d(x_0⊗ y_1+y_0⊗ x_1) =Ḃ(x_0⊗ x_1)+ x_0· x_1 +Ȧ(x_0⊗ y_1+y_0⊗ x_1) =(+⊗)(g_0⊗ g_1). If g_0,g_1 are cocycles then by Proposition <ref> we have v_2(g_0⊗ g_1)=(g_0⊗ g_1) = g_0· g_1.□ For p≠4 let F:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+4(Y_2) be defined by F(_0⊗_1),_2:= #[M(X;)∩ w_3(_01)∩ w_3(_02)]. For p=4 the map F may not be well-defined due to possible factorizations through the trivial connection over the end _-× Y_0. The definition of F involves two different sections of the bundle _0→^*(Y_0[0]), namely s_k:= 10k, k=1,2. From now on we assume s_1,s_2 are so close that they define the same cup product v_3:C^*(Y_0)→ C^*+3(Y_0). If the sections s_1,s_2 are sufficiently close then the map F in eqn:Fdef can be extended to all bidegrees (p,q) such that dF+F d=B(v_3⊗1)+v_3B+E v_3+A(ψ⊗1), where ψ is as in Proposition <ref>. 
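A quick degree check on this identity (with the gradings fixed earlier: $A$ of degree $0$; $B=T_{3,0,1}$ and $E=T_{3,0,2}$ of degree $2$; $F$ of degree $4$; $d$ of degree $1$; $v_3$, and $\hat v_3=v_3\otimes 1+1\otimes v_3$, of degree $3$; $\psi$ of degree $5$): each term is a homomorphism $C^p(Y_0)\otimes C^q(Y_1)\to C^{p+q+5}(Y_2)$, since
\[
4+1\;=\;2+3\;=\;0+5\;=\;5 .
\]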
The main difficulty in extending the map F to degree p=4, related to factorization through the trivial connection over the end (-∞,0]× Y_0, is the same as in extending the map ψ to degree 4, and the main difficulty in proving eqn:Fthm is the same as in proving that ψ is a cochain map (Proposition <ref>). As we prefer to explain the ideas involved in the simplest possible setting, we will not spell out the proof of Proposition <ref> but instead refer to Subsection <ref> for details. Sometimes we will fix the variable _1 in the expressions defining A,B,E,F. Thus, for any y∈ C^r(Y) we define a homomorphism A_y:C^*(Y_0)→ C^*-r(Y_2), x↦ A(x⊗ y), and we define B_y,E_y,F_y similarly. Looking at moduli spaces over X with trivial limit over the end _-× Y_1 we obtain homomorphisms A_θ :C^*(Y_0)→ C^*(Y_2), E_θ :C^*(Y_0)→ C^*+2(Y_2). with matrix coefficients A_θ(_0),_2 :=#M(X;_0,θ,_2), E_θ(_0),_2 :=#[M(X;_0,θ,_2)∩ w_3(_02)]. We consider a variant of Floer's complex introduced by Donaldson <cit.>. For any oriented homology 3–sphere Y let *(Y) be the complex with cochain groups p(Y) =C^p(Y), p≠0, 0(Y) =C^0(Y)⊕/2 and differential d̅=d+'. Now take Y:=Y_1. For y=(z,t)∈0(Y_1) let A_y:=A_z+tA_θ, E_y:=E_z+tE_θ. For any x∈ C(Y_1) and y∈*(Y_1) we have [d,A_y]+A_d̅y =0, [d,E_y]+E_d̅y =[A_y,v_3], [d,B_x]+B_dx =A_xv_3+A_v_3x, [d,F_x]+F_dx =[B_x,v_3]+E_xv_3+E_v_3x+A_xψ. Here, [d,A_y]=dA_y+A_yd, and similarly for the other commutators. For y∈ C(Y_1) this follows from Propositions <ref> and <ref>, whereas the case y=(0,1)∈0(Y_1) is easy.□ Suppose x∈ C^-2(Y_1) and y=(z,t)∈0(Y_1) satisfy dx=0, v_3x=d̅y. Then the homomorphism :MC^*(Y_0)→ MC^*(Y_2) given by the matrix ( [ A_y+B_x A_x; E_y+F_x+A_xΞ A_y+B_x+E_x+A_xv_2 ]) is a cochain map. Writing =([ P Q; R S ]) we have d+ d=( [ dP+Pd+Qv_3 dQ+Qd; dR+Rd+v_3P+Sv_3 dS+Sd+v_3Q ]). The fact that this matrix vanishes is easily deduced from Propositions <ref> and <ref> and Lemma <ref>. We write out the calculation only for the bottom left entry. [d,E_y +F_x+A_xΞ] =E_v_3x+[v_3,A_y]+[v_3,B_x]+E_v_3x+E_xv_3+A_xψ+A_x[d,Ξ] =v_3(A_y+B_x)+(A_y+B_x+E_x+A_xv_2)v_3, hence [d,R]=v_3P+Sv_3 as claimed.□ As maps MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) one has u_3=0. For j=0,1 let (x_j,y_j) be a cocycle in MC(Y_j), i.e. dx_j=0, v_3x_j=dy_j. Let the map of Lemma <ref> be defined with x=x_1, y=y_1, and let (x_2,y_2):=(x_0,y_0). Then ((x_0,y_0)⊗(x_1,y_1))=B_x_1(x_0)+A_y_1(x_0)+A_x_1(y_0)=x_2. Since (x_2,y_2) is a cocycle, we have v_3x_2=dy_2, proving the proposition. □ If (Y_j)≥1 for j=0,1 then (Y_2)≥(Y_0)+(Y_1). For j=0,1 let n_j:=(Y_j) and choose z_j∈ MI(Y_j) such that U^kz_j=cases 0 for 0≤ k<n_j-1, 1 for k=n_j-1. Let x:=(z_0⊗ z_1)∈ I(Y_2). Then u_3x=0 by Proposition <ref>. For 0≤ k_j≤ n_j-1, repeated application of Proposition <ref> yields u_2^k_0+k_1x=(U^k_0z_0⊗ U^k_1z_1), hence u_2^k_0+k_1x=0 by Proposition <ref>. Therefore, u_2^mx=0, 0≤ m≤ n_1+n_2-2. On the other hand, u_2^n_1+n_2-1x = u_2u_2^n_0-1u_2^n_1-1x = u_2(U^n_0-1z_0⊗ U^n_1-1z_1) =( U^n_0-1z_0)( U^n_1-1z_1) =1. Therefore, (Y_2)≥ n_0+n_1 as claimed.□ We will give a second application of Lemma <ref>, but first we need some preparation. Let A^θ_θ:C^5(Y_0)→/2 be defined on generators by A^θ_θ():=#M(,θ,θ). For y=(z,t)∈ q(Y_1) define A^θ_y:C^5-q(Y_0)→/2 and B^θ_z:C^3-q(Y_0)→/2 by A^θ_y(x):=A(x⊗ z)+tA^θ_θ(x), B^θ_z(x):=B^θ(x⊗ z). (i) A_θ+A^θ_θ d+A^θ_'(1)=. (ii) A_y+A^θ_y d+A^θ_d̅y=t. (iii) B_z+B^θ_z d+B^θ_dz=A^θ_zv_3+A^θ_v_3z. If (Y_0)≥1 and (Y_1)=0 then (Y_2)≥1. Since (Y_0)≥1 we can find (x_0,y_0)∈ MC^6(Y_0) such that dx_0=0, v_3x_0=dy_0, x_0=1. 
Since (Y_1)=0, Lemma <ref> says that there exist x_1∈ C^-2(Y_1) and y_1=(z_1,1)∈ 0(Y_1) such that dx_1=0, v_3x_1=d̅y_1. Let be as in Lemma <ref>. Then (x_0,y_0) is a cocycle in MC(Y_2), and by Lemma <ref> we have (x_0,y_0) =(A_y_1+B_x_1)x_0+ A_x_1y_0 =(_d̅y_1++_x_1v_3+_v_3x_1)x_0+_x_1dy_0 =1. Therefore, (Y_2)≥1.□ §.§ Operations, II We now consider the case when X has one incoming end (-∞,0]× Y_0 and two outgoing ends [0,∞)× Y_1 and [0,∞)× Y_2, where Y_2==(2,3,5) is the Poincaré homology sphere oriented as the boundary of the negative definite E_8–manifold. We again assume that H_i(X;)=0, i=1,2. We will define homomorphisms P,P',Q:C^*(Y_0)→ C^*+d(Y_1) where d=2,3,4, respectively, making use of cut-down moduli spaces introduced at the end of Subsection <ref> with h=2, so that τ^+=τ^+_2. We define P,P',Q on generators by P_0,_1 :=#[M(X;_0,_1,θ)∩ w_2(τ^+)], P'_0,_1 := #[M(X;_0,_1,θ)∩ w_2(_01)∩ w_2(τ^+)], Q_0,_1 := #[M(X;_0,_1,θ)∩ w_3(_01)∩ w_2(τ^+)]. As maps C(Y_0)→ C(Y_1) the following hold. (i) [d,P]=0. (ii) [d,P']=[v_2,P]. (iii) [d,Q]=[v_3,P]+'. (iv) P+Pd=. Here, is as defined at the end of Subsection <ref>. In (iii), argue as in the proof of Proposition <ref> to handle factorization through the trivial connection over X.□ Note that statements (i), (iii) are equivalent to the fact that the homomorphism Ψ= ([ P 0; Q P ]) :MC^*(Y_0)→ MC^*+2(Y_1) satisfies [D,Ψ]='. The homomorphism I^*(Y_0)→ I^*+2(Y_1) induced by P will also be denoted by P. As maps I(Y_0)→ I(Y_1) the following hold. (i) [u_2,P]=0. (ii) [u_3,P]='. (iii) P= u_2. Combine Propositions <ref> and <ref>.□ If (Y_0)≥2 then (Y_1)≥(Y_0)-1. Let n:=(Y_0) and choose x∈ I(Y_0) such that u_3x=0 and u_2^kx= 0 for 0≤ k<n-1, 1 for k=n-1. By Proposition <ref> we have u_3Px=0 and u_2^kPx= Pu_2^kx= u_2^k+1x= 0 for 0≤ k<n-2, 1 for k=n-2. This shows that (Y_1)≥ n-1.□ §.§ Additivity of Throughout this subsection, Y,Y_0,Y_1 will denote oriented homology 3–spheres. As before, will denote the Poincaré homology sphere. If (Y_j)≥1 for j=1,2 then (Y_0# Y_1)≥(Y_0)+(Y_1). Recall that there is a standard cobordism W from (-Y_0)∪(-Y_1) to Y_0# Y_1. By attaching half-infinite tubular ends to W we obtain a manifold X to which we can apply the results of Subsection <ref>. The proposition now follows from Proposition <ref>.□ If (Y_0)≥1 and (Y_1#(-Y_0))=0 then (Y_1)≥1. This follows from Proposition <ref>.□ If (Y#)≥2 then (Y)≥(Y#)-1. This follows from Proposition <ref> with Y_0=Y# and Y_1=Y. □ In the following, we write Y_0∼ Y_1 to indicate that Y_0 and Y_1 are homology cobordant. If Y_0# Y_1∼ then (Y_0)+(Y_1)=1. Let k_j:=(Y_j). Case 1: n_0n_1=0. Without loss of generality we may assume that n_1=0. By Proposition <ref> we have n_0≥1. If n_0≥2 then, since Y_0∼#(-Y_1), Proposition <ref> would give -n_1=(-Y_1)≥(#(-Y_1)-1≥1, a contradiction. Hence, n_0=1, so the lemma holds in this case. Case 2: n_0n_1>0. We show that this cannot occur. If k_j>0 then Proposition <ref> yields 1=()≥ n_0+n_1≥2, a contradiction. Similarly, if k_j<0 then the same proposition yields -1=(-)≥2. Case 3: n_0n_1<0. Then we may assume that n_0>0. Applying Proposition <ref> we obtain n_0=(#(-Y_1))≥1-n_1≥2. Proposition <ref> now gives -n_1≥ n_0-1. Altogether, this shows that n_0+n_1=1.□ (Y#)=(Y)+1. Apply the lemma with Y_0=Y# and Y_1=-Y.□ For any oriented integral homology 3–spheres Y_0,Y_1 one has (Y_0# Y_1)=(Y_0)+(Y_1). Let k_j:=(Y_j) and Z_j:=Y_j#(-k_j). By Corollary <ref> we have (Z_j)=0, so by Proposition <ref>, 0=(Z_0# Z_1)=(Y_0# Y_1#(-n_0-n_1))=(Y_0# Y_1)-n_0-n_1. □ § FURTHER PROPERTIES OF . 
EXAMPLES §.§ Proof of Theorem <ref> Let W' be the result of connecting the two boundary components of W by a 1–handle. Then W and W' have the same second cohomology group and the same intersection form. Let Z be the negative definite E_8–manifold (i.e. the result of plumbing on the E_8 graph), so that the boundary of Z is the Poincaré sphere . We will apply Theorem <ref> to the boundary-connected sum V:=W'#_∂ Z. Let S,S'⊂ Z be embedded oriented 2–spheres corresponding to adjacent nodes on the E_8 graph. These spheres both have self-intersection number -2, and S· S'=1. Let v=P.D.([S])∈ H^2(V, V)≈ H^2(V) be the Poincaré dual of the homology class in V represented by S. Then v·[S']=1, hence v is strongly admissible. The class w∈ J_V represented by v satisfies w^2=-2, and ± w are the only classes in w+2J_V with square norm 2. Theorem <ref> and Proposition <ref> now yield (Y)+1=(Y#)≥1, hence (Y)≥0 as claimed.□ §.§ Proof of Theorem <ref> Theorem <ref> is an immediate consequence of the following two propositions. Let K,K' be knots in S^3 such that K' is obtained from K by changing a positive crossing. Let Y,Y' be (-1) surgeries on K,K', respectively. Then 0≤(Y')-(Y)≤1. We observe that Y' is obtained from Y by (-1) surgery on a linking circle of the crossing such that bounds a surface in Y of genus 1. The surgery cobordism W from Y to Y' satisfies H_1(W;)=0 and b^+_2(W)=0, hence (Y')≥(Y) by Theorem <ref>. Since Y bounds a simply-connected negative definite 4–manifold (the trace of the surgery on K) we have (Y)≥0 by the same theorem. Let Y” be 0–surgery on . By Floer's surgery theorem <cit.> there is a long exact sequence ⋯→ I(Y”)→ I(Y)ϕ→ I(Y')ψ→ I(Y”)→⋯ where ϕ is induced by the cobordism W. Let n:=(Y') and suppose n≥2, the proposition already being proved for n=0,1. Then there is a b∈ I(Y') such that u_2^jb= 0, 0≤ j<n-1, 1, j=n-1. By Proposition <ref> we have ψ u_2b=u_2ψ b=0, hence u_2b=ϕ a for some a∈ I(Y). For j≥0 we have u_2^j a= u_2^jϕ a= u_2^j+1 b. Combining this with Corollary <ref> we obtain (Y)≥ n-1=(Y')-1 and the proposition is proved.□ If Y is (-1) surgery on a positive knot K in S^3 then (Y)=0. This follows from Theorem <ref> because Y bounds simply-connected 4–manifolds V_± where V_+ is positive definite and V_- is negative definite. As V_- one can take the trace of the (-1) surgery on K. On the other hand, since K can be unknotted by changing a collection of positive crossings, the observation in the beginning of the proof of Proposition <ref> yields V_+.□ §.§ Proof of Proposition <ref> Let Y_k:=(2,2k-1,4k-3). Then Y_k bounds the simply-connected 4–manifold V_k obtained by plumbing according the weighted graph in Figure 1, where the total number of nodes is 4k. Let e_1,…,e_4k be an orthonormal basis for ^4k. The intersection form of V_k is isomorphic to the lattice _4k:= {∑_i x_ie_i 2x_i∈, x_i-x_j∈, ∑_i x_i∈2}, with the nodes of the plumbing graph corresponding to the following elements of _4k: 1/2∑_i=1^4ke_i, e_2+e_3, (-1)^j(e_j-1-e_j), j=3,…,4k. Let w∈ J_k=H^2(V_k;) be the element corresponding to 1/2∑_i=1^4ke_i. Since ± w are the only elements of minimal square norm in w+2J_k it follows from Theorem <ref> that (Y_k)≥ k-1. On the other hand, Y_k is also the result of (-1) surgery on the torus knot T_2,2k-1. Since T_2,2k-1 can be unknotted by changing k-1 crossings we deduce from Theorem <ref> that (Y_k)≤ k-1. 
This proves the proposition.□ §.§ Proof of Theorem <ref> Since we will use different coefficient rings R, the homomorphism :C^4(Y;R)→ R defined in Subsection <ref> will now be denoted by _R. By definition, the condition h(Y)>0 means that there exists a cocycle w∈ C^4(Y;) such that _ w≠0. Note that replacing the coefficient group by yields an equivalent condition. On the other hand, the condition (Y)>0 means that there exists a cocycle z∈ C^4(Y;/2) such that _/2z≠0 and such that the cohomology class of z is annihilated by u_3. If in addition z lifts to an integral cocycle z∈ C^4(Y;) then _ z must be odd, in particular non-zero, hence h(Y)>0. Now suppose (Y)>0 and h(Y)≤0. The above discussion shows that the homomorphism I^4(Y;)→ I^4(Y;/2) is not surjective, hence the Bockstein homomorphism I^4(Y;/2)→ I^5(Y;) is non-zero. This proves the theorem.□ §.§ Proofs of Theorems <ref> and <ref> Proof of Theorem <ref>: Part (i) was proved in <cit.> using Seiberg-Witten theory. To prove (ii), let =(2,3,5). Then ()=1 by Proposition <ref>. If H^2(X;) contains no 2–torsion then (ii) follows from Corollary <ref>. Under the weaker assumption that H^2(X;) contains no element of order 4, we can appeal to Theorem <ref> since u_3=0 on I().□ Proof of Theorem <ref>: Let be the monopole h–invariant defined in <cit.>. (One could equally well use the correction term d.) Then ()=-1, and additivity of yields (#)=-2. If ξ is any characteristic vector for J_X then by <cit.> one has -(Y)≥1/8(b_2(X)+ξ·ξ). Let J_X=m-1⊕ J_X as in Corollary <ref>. By assumption, J_X is even, so J_X has characteristic vectors ξ with ξ·ξ=-m. Therefore, J_X=b_2(X)-m≤16. By the classification of even unimodular definite forms of rank ≤16 (see <cit.>) one has J_X=0, -E_8, -2E_8, or -_16. It only remains to rule out J_X=-_16. Recalling that is the result of (-1) surgery on the negative trefoil knot and applying Proposition <ref> twice we find that u_2^2=0 on I^*(#), hence (#)≤2. On the other hand, if J_X=-_16 then applying Theorem <ref> as in the proof of Proposition <ref> we would obtain (#)≥3, a contradiction. This proves the theorem.□ § TWO POINTS MOVING ON A CYLINDER, I The main goal of this section is to prove Proposition <ref>. The first two subsections will introduce some concepts used in the proof, which appears in the final subsection. §.§ Energy and holonomy Let Y be an oriented (integral) homology 3–sphere with base-point y_0. Let →^*(Y[0]) be the canonical oriented Euclidean 3–plane bundle, where Y[0]=[-1,1]× Y as in eqn:ybt-def. Let ,β∈(Y), not both reducible. Over M(,β)× there is a canonical 3–plane bundle β obtained by pulling back the universal bundle over M(,β)×× Y by the map (,t)↦(,t,y_0). There is a canonical isomorphism β→ R^* where R:M(,β)×→^*(0), (,t)↦[t], so we can identify the fibre of β at (,t) with the fibre _[t] of at [t]. Recall from Subsection <ref> that a section of β is called holonomy invariant if for all =[A]∈ and real numbers s<t one has that (,s) is mapped to (,t) by the isomorphism equation* _[s]→_[t]. defined by holonomy of A along the path [s,t]×{y_0}. Let be the set of elements of ^*(0) that can be represented by flat connections. Choose three sections ρ_1,ρ_2,ρ_3 of which form a positive orthonormal basis at every point in some neighbourhood of . Choose >0 so small that the following three conditions hold: description (i)If A is any instanton over (-∞,2]× Y satisfying A(-∞,2]< such that the flat limit of A is irreducible then ρ_1,ρ_2,ρ_3 are orthonormal at A[0]. 
(ii)If A is any instanton over [-2,∞)× Y satisfying A[-2,∞)< such that the flat limit β of A is irreducible then ρ_1,ρ_2,ρ_3 are orthonormal at A[0]. (iii)For each pair ,β∈(Y) the difference ()-(β)∈/ has no real lift in the half-open interval (0,2]. Here, _A refers to the energy of A as defined in eqn:def-energy. Let ,β be distinct elements of (Y). If [A]∈ M(,β) then _A()>2, since the left hand side is a positive real lift of ()-(β). We can therefore define smooth functions τ^-,τ^+:M(,β)→ implicitly by _A((-∞,τ^-(A)+2])= =_A([τ^+(A)-2,∞)). We will consider the average and difference τ_a:=1/2(τ^++τ^-), τ_d:=τ^+-τ^-. Clearly, τ_d>0. There are translationary invariant smooth restriction maps R^±:M(,β)→^*(0), ↦[τ^±()] which, by the unique continuation result of Proposition prop:unique-continuation-cylinder, descend to injective maps Ř^±:(,β)→^*(0). If is irreducible then for any =[A]∈ M(,β) the vectors equation ρ_i(R^-()), i=1,2,3 form an orthonormal basis for _R^-(), by choice of . Let ρ^-_i be the holonomy invariant section of β whose value at (,τ^-()) is ρ_i(R^-()). Similarly, if β is irreducible, then the vectors ρ_i(R^+()) form an orthonormal basis for _R^+(). Let ρ^+_i be the holonomy invariant section of β whose value at (,τ^+()) is ρ_i(R^+()). If ,β are both irreducible let h=(h_ij):M(,β)→3 be the map whose value at [A] is the holonomy of A along [τ^-(A),τ^+(A)]×{y_0} with respect to the bases described above, so that ρ^-_j(,t)=∑_ih_ij()ρ^+_i(,t). §.§ Factorization through the trivial connection Now assume ()=4, (β)=1. We will introduce real valued functions ^± on M(,β) which measure the extent to which a given element factors through the trivial connection over Y. Set M_,θ:=R^-(M(,θ)), which is a finite subset of ^*(0). Let M_ be the union of all subsets R^-(M(,β'))⊂^*(0) where β'∈^*(Y) and M(,β')≤4. Note that M_ is compact. Choose an open neighbourhood U_ of M_,θ in ^*(0) such that itemize * the closure of U_ is disjoint from M_, * U_ is the disjoint union of open sets U_,i, i=1,…,r, each of which contains exactly one point from M_,θ. Choose a closed neighbourhood U'_ of M_,θ contained in U_ and a smooth function equation e_:→[0,∞) such that e_=1 on U'_ and e_=0 outside U_. Define the translationary invariant function λ^-:M(,β)→[0,∞), ↦ e_(R^-())·τ_d(). The function ^+ is defined in a symmetrical fashion (corresponding to reversing the orientation of Y). Let M_β be the union of all subsets R^+(M(',β))⊂^*(0) where '∈^*(Y) and M(',β)≤4. Choose an open neighbourhood V_β of M_θ,β:=R^+(M(θ,β) in ^*(0) such that the closure of V_β is disjoint from M_β, and such that V_β is the disjoint union of open sets V_β,j, j=1,…,s, each of which contains exactly one point from M_θ,β. Choose a closed neighbourhood V'_β of M_θ,β contained in V_β and a smooth function e_β:→[0,∞) such that e_β=1 on V'_β and e_β=0 outside V_β. Set λ^+:M(,β)→[0,∞), ↦ e_β(R^+())·τ_d(). lemma There is a constant C<∞ such that for any ∈ M(,β) satisfying ^-()+^+()>C one has ^-()=^+(). Suppose the lemma does not hold. Then one can find a sequence _n in M(,β) such that ^-(_n)+^+(_n)→∞ and ^-(_n)≠^+(_n). After passing to a subsequence we may assume that the sequence _n chain-converges. If the chain-limit lay in (,β), or if the chain-limit involved factorization through an irreducible critical point, then ^±(_n) would be bounded. 
Therefore, the chain-limit must lie in (,θ)×(θ,β) and, consequently, ^-(_n)=τ_d(_n)=^+(_n) for n≫0, a contradiction.□ In the course of the proof we also obtained the following: lemma For a chain-convergent sequence _n in M(,β) the following are equivalent: description (i) λ^-(_n)→∞. (ii) λ^+(_n)→∞. (iii) The chain-limit of _n lies in (,θ)×(θ,β).□ Since ^+ will not appear again in the text, we set :=^- to simplify notation. For any real number T set _=T:={∈()=T}. Given ∈ M(,β), one has R^-()∈ U_ if ()>0 (by definition of ), and R^+()∈ V_β if ()≫0 (by Lemma <ref>). Therefore, if ()≫0 then there is a map d:M(,β)_=T→(,θ)×(θ,β) characterized by the fact that if d()=(_1,_2) then R^-() and Ř^-(_1) lie in the same set U_,i, and R^+() and Ř^+(_2) lie in the same set V_β,j. Gluing theory (see <cit.>) provides the following result: lemma There is a T_0>0 such that for any T≥ T_0 the map d× h×τ_a: _=T→((,θ)×(θ,β))×3× is a diffeomorphism.□ §.§ Proof of Proposition <ref> Let ,β∈^*(Y) with (β)-()≡58. To compute the matrix coefficient (v_2v_3+v_3v_2),β we distinguish between two cases. If ()≢48 the calculation will consist in counting modulo 2 the number of ends of the 1-manifold 23(,β). If ()≡48 then M(,β) may contain sequences factoring through the trivial connection over Y. To deal with this we consider the subspace of M(,β)× consisting of points (,t) with ()≤ T for some large T. By carefully cutting down this subspace to a 1-manifold and then counting the number of ends and boundary points modulo 2 we obtain eqn:v2v3chhom. For s∈ we define the translation map _s:→, (t,y)↦(t+s,y). Part (I) Suppose ()≢48. Then no sequence in M(,β) can have a chain-limit involving factorization through the trivial connection. We will determine the ends of the smooth 1-manifold 23(,β). Let (_n,t_n) be a sequence in 23(,β). After passing to a subsequence we may assume that the following hold: description (i) The sequence ^*_-t_n(_n) converges over compact subsets of to some ^-∈ M(^-,β^-). (By this we mean that there are connections A_n,A̅ representing _n,^- respectively, such that A_n→A̅ in C^∞ over compact subsets of .) (ii) The sequence ^*_t_n(_n) converges over compact subsets of to some ^+∈ M(^+,β^+). (iii) The sequence t_n converges in [-∞,∞] to some point t_∞. Here, [-∞,∞] denotes the compactification of the real line obtained by adding two points ±∞. Suppose (_n,t_n) does not converge in 23(,β). Case 1: t_∞ is finite. Then M(^-,β^-) has dimension 4 and either ^-= or β^-=β. The corresponding number of ends of 23(,β), counted modulo 2, is (dϕ+ϕ d),β. Case 2: t_∞=∞. Let n^± be the dimension of M(^±,β^±). Because s_1(^-[0])=0, s_2(^+[0])∧ s_3(^+[0])=0 we must have n^-≥3 and n^+≥2. On the other hand, n^-+n^+≤ M(,β)=5, so n^-=3, n^+=2. It follows that =^-, β^-=^+, β^+=β. The corresponding number of ends of 23(,β) is v_2v_3,β modulo 2. Case 3: t_∞=-∞. Arguing as in Case 2 one finds that the number of such ends of 23(,β) is v_3v_2,β modulo 2. Since the total number of ends of 23(,β) must be zero modulo 2, we obtain the equation eqn:v2v3chhom in the case ()≢48. Part (II) Now suppose ()≡48. We will again make use of a cut-off function b as in eqn:b-prop1 in Subsection <ref>, but we now impose two further conditions, namely b(0)=1/2, b'(t)>0 for -1<t<1. Set c:×→, (,t)↦ b(t-τ_a()). Choose generic 3×3 matrices A^+=(a^+_ij) and A^-=(a^-_ij) and for j=1,2,3 define a section ρ_j of the bundle R^* over M(,β)× by ρ_j:=(1-c)∑_ia^-_ijρ^-_i+c∑_ia^+_ijρ^+_i. Define a function g:M(,β)×→[0,1] by g(,t):=b(()-1)· b(τ^+()-t)· b(t-τ^-()). 
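To orient the reader (an unwinding of the definitions, using only the properties of $b$ assumed earlier in eqn:b-prop1 together with $b(0)=\tfrac12$ and $b'>0$ on $(-1,1)$): one has
\[
g(\omega,t)=0\quad\text{if }\lambda(\omega)\le 0,\ \ t\le\tau^-(\omega)-1,\ \text{or}\ t\ge\tau^+(\omega)+1,
\]
\[
g(\omega,t)=1\quad\text{if }\lambda(\omega)\ge 2\ \text{and}\ \tau^-(\omega)+1\le t\le\tau^+(\omega)-1 .
\]
Thus the sections defined by the next formula agree with $s_j(\omega[t])$ in the first case and equal the interpolated holonomy-invariant sections $\rho_j(\omega,t)$ introduced above in the second.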
For j=1,2,3 we now define a section s_j of R^* by s_j(,t):=(1-g(,t))· s_j([t])+g(,t)·ρ_j(,t). defn Let 23(,β) be the subspace of × consisting of those points (,t) that satisfy the following conditions: itemize * s_1(,-t)=0, * s_2(,t) and s_3(,t) are linearly dependent. To understand the ends of 23(,β) we will need to know that certain subspaces of M(,θ) and M(θ,β), respectively, are “generically” empty. These subspaces are defined as follows. For ∈ M(,θ) and j=1,2,3 let s_j():=(1-b(-τ^-()))· s_j([0])+b(-τ^-()) ∑_ia^-_ijρ^-_i(,0), and for ∈ M(θ,β) let s_j():=(1-b(τ^+()))· s_j([0])+b(τ^+()) ∑_ia^+_ijρ^+_i(,0). Set M_2(,θ) :={∈ M(,θ) s_2()∧ s_3()=0}, M_3(,θ) :={∈ M(,θ) s_1()=0}. Replacing (,θ) by (θ,β) in the last two definitions we obtain subspaces M_k(θ,β) of M(θ,β). For k=2,3, each of the spaces M_k(,θ) and M_k(θ,β) has expected dimension 1-k and is therefore empty for “generic” choices of sections s_j and matrices A^±. There is a constant C_0<∞ such that for all (,t)∈23(,β) one has |t|≤min(-τ^-(),τ^+())+C_0. We must prove that both quantities |t|+τ^-() and |t|-τ^+() are uniformly bounded above for (,t)∈23(,β). The proof is essentially the same in both cases, so we will only spell it out in the first case. Suppose, for contradiction, that (_n,t_n) is a sequence in 23(,β) with |t_n|+τ^-(_n)→∞. After passing to a subsequence we may assume that the sign of t_n is constant, so |t_n|=-et_n for some constant e=±1. Then [et_n]→ by exponential decay (see <cit.>), and s_j(,et_n)=s_j(_n[et_n]) for n≫0. If e=1 then this gives 0=s_2(_n[t_n])∧ s_3(_n[t_n])→ s_2()∧ s_3(), as n→∞, whereas if e=-1 we get 0=s_1(_n[-t_n])→ s_1(). However, for “generic” sections s_j, both s_2()∧ s_3() and s_1() are non-zero. This contradiction proves the lemma. □ For any constant C_1<∞ there is constant L>0 such that for all (,t)∈23(,β) satisfying ()≥ L one has |t|≤min(-τ^-(),τ^+())-C_1. Suppose to the contrary that there is a constant C_1<∞ and a sequence (_n,t_n) in 23(,β) such that (_n)→∞ and |t_n|>min(-τ^-(_n),τ^+(_n))-C_1. After passing to a subsequence we may assume that at least one of the following two conditions holds: (i) |t_n|>-τ^-(_n)-C_1 for all n, (ii) |t_n|>τ^+(_n)-C_1 for all n. The argument is essentially the same in both cases, so suppose (i) holds. By Lemma <ref> we also have |t_n|≤-τ^-(_n)+C_0, hence the sequence τ^-(_n)+|t_n| is bounded. Since (_n)→∞ we have τ_d(_n)→∞, so τ^+(_n)+|t_n|=τ_d(_n)+(τ^-(_n)+|t_n|)→∞. After passing to a subsequence we may assume that * the sequence _n chain-converges; * the sequence τ^-(_n)+|t_n| converges to a real number; * |t_n|=-et_n for some constant e=±1. From Lemma <ref> we deduce that '_n:=^*_et_n_n converges over compact subsets of to some ∈ M(,θ). For large n we have c(_n,et_n)=0 and g(_n,et_n)=b(et_n-τ^-(_n))=b(-τ^-('_n))→ b(-τ^-()). For j=1,2,3 we now get s_j(_n,et_n)→ s_j(). But then lies in M_2(,θ) (if e=1) or in M_3(,θ) (if e=-1), contradicting the fact that the latter two spaces are empty.□ Choose L≥2 such that for all (,t)∈23(,β) with ()≥ L one has |t|≤min(-τ^-(),τ^+())-1, which implies that s_j(,t)=ρ_j(,t). Set 23(,β):={(,t)∈23(,β)()≥ L}. We will show that 23(,β) is transversely cut and therefore a one-manifold with boundary, and determine the number of boundary points and ends modulo 2. We will see that the number of ends is given by the same formula as in Part (I), whereas the boundary points contribute the new term ' of eqn:v2v3chhom. Ends of 23(,β): Let (_n,t_n) be a sequence in 23(,β). 
After passing to a subsequence we may assume that (i),(ii), (iii) of Part (I) as well as the following hold: description (iv) The sequence _n is chain-convergent. (v) The sequence τ_a(_n) converges in [-∞,∞]. (vi) Either (_n)>0 for all n, or (_n)=0 for all n. Suppose (_n,t_n) does not converge in 23(,β). Case 1: (_n)=0 for all n. Then g(_n,t_n)=0 and therefore s_j(_n,t_n)=s_j(_n[t_n]). This case is similar to Part (I) and the corresponding number of ends of 23(,β), counted modulo 2, is (v_2v_3+v_3v_2+dϕ+ϕ d),β, where ϕ is defined as before. Case 2: (_n)>0 for all n. We show this is impossible. By definition of the chain-limit of _n must lie in (,β), so τ_d(_n) is bounded. By Lemma <ref>, the sequence τ^-(_n) is bounded above whereas τ^+(_n) is bounded below, hence both sequences must be bounded. Applying Lemma <ref> again we see that t_n is bounded. Therefore, both sequences τ_a(_n) and t_n converge in , so (_n,t_n) converges in M(,β)× and hence in 23(,β), which we assumed was not the case. Boundary points of 23(,β): Let M=M(3,) be the space of all 3×3 real matrices, and let U⊂ M be the open subset consisting of those matrices B satisfying B_1≠0, B_2∧ B_3≠0, where B_j denotes the jth column of B. Then M∖ U is the union of three submanifolds of codimension at least two, hence U is a connected subspace and a dense subset of M. Let F:3××× U× U →^3×^3×^3, (H,v,w,B^+,B^-) ↦(F_1,F_2,F_3), where F_1 =(1-b(v))HB^-_1+b(v)B^+_1, F_j =(1-b(w))HB^-_j+b(w)B^+_j, j=2,3. Then F is a submersion, so F(0,0,0) is empty. Moreover, the set Z:=F({0}× L(^3), consisting of those points in the domain of F for which F_1=0, F_2∧ F_3=0, is a codimension 5 submanifold and a closed subset of 3×^2× U^2. The projection π:Z→ U^2 is a proper map whose mod 2 degree is _2(π)=1. The equations eqn:FFF imply -1<v,w<1, hence π is proper. To compute its degree, let e_1,e_2,e_3 be the standard basis for ^3 and let B^± be given by B^-_1=B^-_2=e_1, B^-_3=e_2, B^+_1=-e_1, B^+_2=e_1, B^+_3=-e_2. We show that the preimage Z':=π(B^+,B^-) consists of precisely one point. Suppose (H,v,w)∈ Z'. Because 0≤ b≤1, the equation F_1=0 implies b(v)=1/2 and hence v=0, He_1=e_1, F_2=e_1. Because He_2⊥ e_1, the vectors F_2,F_3 are linearly dependent if and only if F_3=0, which yields w=0, He_2=e_2. Thus, Z'={(I,0,0)}, where I is the identity matrix. Using the fact that f(I,0,0)=(0,e_1,0) and that the tangent space to L^*(^3) at (e_1,0) is ^3×{0}+ e_1 it is easy to see that the map F( · , · , · ,B^+,B^-):3××→^9 is transverse to {0}× L^*(^3) at (I,0,0), or equivalently, that (B^+,B^-) is a regular value of π. This proves the claim.□ By Lemma <ref> we can identify ∂23(,β)= (,θ)×(θ,β)×π(A^+,A^-), where (H,v,w) corresponds to (h(),-t-τ_a(),t-τ_a()) for (,t)∈∂23(,β). Hence, for generic matrices A^± the number of boundary points of 23(,β), counted modulo 2, is ',β. This completes the proof of Proposition <ref>. □ § TWO POINTS MOVING ON A CYLINDER, II Let Y be an oriented homology 3–sphere. In this section we will prove Proposition <ref>, which concerns a certain cochain map ψ:C^*(Y)→ C^*+5(Y) appearing in the proof of additivity of . We will continue using the notation introduced in Section <ref>. §.§ The cochain map ψ We begin by recalling the definition of ψ in degrees different from 4 mod 8 given in Subsection <ref>. Let s_1,s_2 be "generic" sections of the canonical 3–plane bundle →^*(Y[0]). (Later we will impose further conditions on s_1,s_2.) For any ,β∈^*(Y) set 33(,β):={(,t)∈× s_1([-t])=0=s_2([t])}. 
If (,β)=5 and ()≢48 then arguing as in Part (I) of the proof of Proposition <ref> one finds that 33(,β) is a finite set. We define the matrix coefficient ψ,β by ψ,β:=#33(,β). Recall that any "generic" section of defines a cup product C^*(Y)→ C^*+3(Y) by the formula eqn:v3def. Let v_3 and v'_3 be the cup products defined by s_1 and s_2, respectively. prop For q≢3,48 one has dψ+ψ d=v_3v'_3+v'_3v_3 as maps C^q(Y)→ C^q+6(Y). Let ,∈^*(Y) with (,)=6 and ()≢3,48. Note that no sequence in M(,) can have a chain-limit involving factorization through the trivial connection. Now let (_n,t_n) be a sequence in 33(,). After passing to a subsequence we may assume that description (i) The sequence ^*_t_n_n converges over compact subsets of to some point ^+∈ M(^+,^+). (ii) The sequence ^*_-t_n_n converges over compact subsets of to some point ^-∈ M(^-,^-). (iii) The sequence t_n converges in [-∞,∞] to some point t_∞. Clearly, s_1(^+[0]=0=s_2(^-[0]), hence (^±,^±)≥3. Case 1: t_∞ finite. Then (^+,^+)=5 and either ^+= or ^+=. The corresponding number of ends of 33(,), counted modulo 2, is (dψ+ψ d),. Case 2: t_∞=∞. Then (^±,^±)=3, so ^-=, ^-=^+, and ^+=. The corresponding number of ends of 33(,) is v_3v'_3, modulo 2. Case 3: t_∞=-∞. As in Case 2 one finds that the number of such ends is v'_3v_3, modulo 2. Since the total number of ends of 33(,) must be zero modulo 2, we obtain the proposition.□ We now show that v_3=v'_3 if the sections s_1,s_2 are close enough in a certain sense. To make this precise, we introduce the following terminology: We will say a section s of has Property 4 if for all ,β∈^*(Y) with (,β)≤4 the map s_β:M(,β)→, ↦ s([0]) is transverse to the zero-section in . Suppose s∈() has Property 4, and let be any finite-dimensional linear subspace of (). Then for any sufficiently small ∈ the following hold: description (i)The section s':=s+ has Property 4. (ii)The sections s and s' define the same cup product C^*(Y)→ C^*+3(Y). Let (,β)=3. Combining the transversality assumption with a compactness argument one finds that the zero-set Z of s_β is a finite set. Now observe that the map equation M(,β)×→, (,)↦(s+)([0]) is smooth, since has finite dimension. Therefore, given any neighbourhood U of Z in M(,β) then the zero-set of (s+)_β is contained in U for all sufficiently small . The lemma now follows by applying the implicit function theorem to the map eqn:sfrpmap.□ From now on we assume that s_1,s_2 are sufficiently close in the sense of the lemma, so that in particular v_3=v'_3. Since we are taking coefficients in /2, we deduce from Proposition <ref> that dψ=ψ d in degrees different from 3 and 4 modulo 8. We now extend the definition of ψ to degree 4. Let ,β∈^*(Y) with ()=4 and (β)=1. To define ψ,β we use the set-up of Subsections <ref> and <ref> and define ρ_j, s_j for j=1,2 as in Subsection <ref>, where A^± should now be generic 3×2 real matrices. In particular, we require that A^± should have non-zero columns and that the angle between the columns of A^+ should be different from the angle between the columns of A^-. For any 3×2 real matrix B with non-zero columns B_j set ν(B):= B_1,B_2/B_1B_2, using the standard scalar product and norm on ^3. Then the above assumption on the angles means that ν(A^+)≠ν(A^-). Now define 33(,β):={(,t)∈× s_1(,-t)=0, s_2(,t)=0}. prop 33(,β) is a finite set. It is easy to see that Lemmas <ref> and <ref> hold with 33(,β) in place of 23(,β). Arguing as in the proof of Proposition <ref> one finds that for any L>0 there are only finitely many points (,t)∈33(,β) with ()≤ L. 
Choose L≥2 such that for all (,t)∈33(,β) with ()≥ L one has |t|≤min(-τ^-(),τ^+())-1, which implies that s_j(,t)=ρ_j(,t). We claim that there are no such (,t). For suppose (,t) is such an element and set (H,v_1,v_2):=(h(),-t-τ_a(),t-τ_a())∈3××. Then for j=1,2 one has (1-b(v_j))HA^-_j+b(v_j)A^+_j=0. However, there is no solution (H,v_1,v_2) to these equations, since we assume the columns A^±_j are non-zero and ν(A^+)≠ν(A^-).□ We define ψ in degree 4 by ψ,β:=#33(,β). prop If the endomorphism ψ is defined in terms of “generic” sections s_1,s_2 that are sufficiently close then dψ=ψ d as maps C^*(Y)→ C^*+6(Y). Although we could deduce this from Proposition <ref> below, we prefer to give a direct proof, partly because the techniques involved are also needed in the proof of Proposition <ref>. It only remains to prove this in degrees 3 and 4 modulo 8. There is a complete symmetry between these two cases because of Lemma <ref>, so we will spell out the proof only in degree 4. Let ,∈^*(Y) with ()=4, ()=2. We will show that (dψ+ψ d),=0 by counting the ends of a certain 1–dimensional submanifold 33(,) of M(,)×. For any '∈(Y) we define a smooth function :M(',)→ as follows. For each β∈^1_Y let K_β be the union of all subsets R^+(M(”,))⊂^*(Y[0]) where β≠”∈(Y) and (”,)≤(β,), where ( · , · ) is as in eqn:cs-al-beta. Then K_β is compact. Choose a closed neighbourhood W_β in ^*(Y[0]) of the finite set R^+(M(β,)) such that W_β is disjoint from K_β, and a smooth function f_β:^*(Y[0])→[0,1] such that the following two conditions hold: * W_β and W_β' are disjoint if β≠β'; * f_β=1 on a neighbourhood of R^+(M(β,)), and f_β=0 outside W_β. Set f:=1-∑_β f_β. Let be the set of all β∈^1_Y such that (',)>(β,)>0. For ∈ M(',) and β∈ we define τ^+_β()∈ implicitly by _([τ^+_β()-2,∞))=(β,)+, where the constant is as in Subsection <ref>, and set ():=f(R^+())·τ^+()+ ∑_β f_β(R^+())·τ^+_β(). The function behaves under translation in the same way as τ^±. Namely, for any real number s one has (^*_s())=()-s. For any ∈ M(',) let () denote the restriction of to the band (). For i=1,2,3 let i be the holonomy invariant section of 'β whose value at (,()) is ρ_i(()). lemma Let _n be a chain-convergent sequence in M(',). If the last term of the chain-limit of _n lies in (β,) for some β∈^*(Y) of index 1 then (τ^+-)(_n)→∞, otherwise the sequence (τ^+-)(_n) is bounded. Because of the translationary invariance of τ^+- we may assume that τ^+(_n)=0. Then _n converges over compact subsets of to some element ∈ M(”,) representing the last term in the chain-limit of _n. In fact, because no energy can be lost at ∞ by the choice of , there are, for any real number r, connections A_n,A representing _n,, respectively, such that A_n-A_L^p,w_1((r,∞)× Y)→0, as follows from the exponential decay results of <cit.>. Here, p,w are as in the definition of the space of connections in Section <ref>. Suppose first that β:=” is irreducible of index 1. Then (_n)=τ^+_β(_n) for n≫0 and (τ^+-τ^+_β)(_n)=-τ^+_β(_n)→∞, proving the first assertion of the lemma. Now suppose the sequence (τ^+-)(_n) is not bounded. After passing to a subsequence we may assume that there exists a β∈ such that for each n one has R^+(_n)∈ W_β. Suppose, for contradiction, that ”≠β. Since W_β is closed we must have R^+()∈ W_β as well, hence (”,)>(β,). From eqn:anai we deduce that τ^+_β(_n)→τ^+_β(), so (-τ^+)(_n)=τ^+_β(_n) is bounded. This contradiction shows that ”=β.□ If _n is a sequence in M(',) which converges over compacta to ∈ M(”,), where ”∈(Y) and (”)≠1, then (_n)→(). 
Let β∈^1_Y with (β,)>0. If (”,)≤(β,) then R^+()∉W_β. Since W_β is closed, we have R^+(_n)∉W_β for n≫0. This means that β contributes neither to () nor to (_n) for n≫0. If on the other hand (”,)>(β,) then τ^+_β(_n)→τ^+_β(). From this the lemma follows.□ Let and be the real-valued functions on M(,) defined by :=1/2(+τ^-), :=1/2(-τ^-). Let :M(,)→[0,∞), ↦ e_(R^-())·(), where e_ is as in eqn:eal. As the following lemma shows, the quantity () measures the extent to which factors through the trivial connection θ over Y. lemma Let _n be a chain-convergent sequence in M(,). If the first term of the chain-limit of _n lies in (,θ) then (_n)→∞, otherwise the sequence (_n) is bounded. Because of the translationary invariance of we may assume τ^-(_n)=0 for all n, so that the sequence _n converges over compact subsets of to some ∈ M(,β), where β∈(Y). Then represents the first term of the chain-limit of _n. Part I. Suppose first that β=θ. We will show that (_n)→∞. There are two sequences 1,2 of real numbers such that itemize * ^*_1(_n) converges over compact subsets of to an element of M(,θ). * ^*_2(_n) converges over compact subsets of to an element of M(θ,β'), where β' is an element of ^*(Y) which is either equal to or has index 1. * 2-1→∞. Define the sequence r_n of real numbers implictly by __n((-∞,r_n])=(,θ)+. Then r_n<τ^+(_n) and r_n<τ^+_β(_n) for all β∈_, hence r_n<(_n). For large n one therefore has (_n)=(_n)-τ^-(_n)>r_n-τ^-(_n). But 1-τ^-(_n), 2-r_n are both bounded sequences and 2-1→∞, hence (_n)>r_n-τ^-(_n)→∞. Part II. Now suppose β is irreducible. We will show that the sequence (_n) is bounded. Case 1: β=. Then _n converges to in M(,), hence (_n) is bounded. Case 2: (,β)≤4. For large n one would then have R^-(_n)∉U_, hence e_(R^-(_n))=0 and therefore (_n)=0. Case 3: (,β)=5, i.e. (β)=1. For large n one would then have R^+(_n)∈ W_β and therefore (_n)=e_(_n[0])·τ^+_β(_n) → e_([0])·τ^+(), so that (_n) is bounded in this case, too.□ Given '∈(Y), a real number d, and a real 3×2 matrix A'=(a'_ij) of maximal rank we define two sections ζ_1,ζ_2 of ' by ζ_j(,t):=b^+ j+(1-b^+)∑_i=1^3a'_ijρ^+_i, where b^+:=b(τ^+--d). Here, and in the remainder of this section, b:→ is a smooth function satisfying eqn:b-prop1 and eqn:b-prop2. We will show that for '= and generic matrix A' the sections ζ_1,ζ_2 are linearly independent at any point (,t)∈ M(,)× with ()≫0. We begin by spelling out sufficient conditions on A' under which this holds. For any β∈^1_Y the finite set (θ,β)×(β,) is in 1-1 correspondence with the set of points (,')∈ M(θ,β)× M(β,) satisfying τ^+()=0=τ^+('). (In other words, this is one way of fixing translation.) For each such pair (,'), represented by a pair (A,A') of connections, say, the holonomy of A along the path [0,∞)×{y_0} composed with the holonomy of A' along (-∞,0]×{y_0} defines an isomorphism _,':_[0]→_'[0]. For any real number r and j=1,2 let η_j(r)=r·_,'(ρ_j([0]))+ (1-r)∑_i=1^3a'_ijρ_i('[0]). Then the set C:={r∈[0,1]η_1(r)∧η_2(r)=0} has expected dimension 1-2=-1 and is empty for generic matrices A'. Since (Y) is finite we conclude that for generic A', the set C is empty for any β∈^1_Y and any (,')∈ M(θ,β)× M(β,) satisfying ttom. From now on we assume A' is chosen so that this holds. lemma Let A' be as described above. If d>0 is sufficiently large then the sections ζ_1,ζ_2 are linearly independent at every point in M(θ,)×. 
If the lemma were false then we could find a sequence d_n of real numbers converging to ∞ and for each n an element _n∈ M(θ,) such that ζ_1,ζ_2, defined with d_n in place of d, are linearly dependent at (_n,t) for some (hence any) t. Because A' has maximal rank and the assumptions on ensure that ρ_1,ρ_2,ρ_3 are linearly independent at R^+(_n), we must have b^+(_n)>0, i.e. (τ^+-)(_n)>d_n-1, which shows that (τ^+-)(_n)→∞. After passing to a subsequence we can assume that the sequence _n is chain-convergent and that b^+(_n) converges to some r∈[0,1]. By Lemma <ref> the chain-limit lies in (θ,β)×(β,) for some β∈^1_Y. Then the sequences ^*_τ^+(_n)(_n), ^*_τ^+_β(_n)(_n) converge over compact subsets of to some ∈ M(θ,β) and '∈ M(β,), respectively, and ttom holds. But then η_1(r) and η_2(r) are linearly dependent, contradicting the assumption on A'.□ From now on we assume that d is chosen so that the conclusion of Lemma <ref> holds. lemma There is a constant T_1<∞ such that the sections ζ_1,ζ_2 are linearly independent at every point (,t)∈ M(,)× with ()>T_1. Recall that if ζ_1,ζ_2 are linearly independent at (,t) for some real number t then the same holds at (,t') for all t'. Now suppose the lemma were false. Then we could find a sequence in M(,) such that ()→∞ and ζ_1(,t),ζ_2(,t) are linearly dependent for every n. We may also arrange that τ^+(_n)=0. After passing to a subsequence we may assume that is chain-convergent. From Lemma <ref> we see that there are two possibilities for the chain-limit. Case 1: The chain-limit of _n lies in (,θ)×(θ,β)×(β,) for some β∈^1_Y. Then (_n)=τ^+_β(_n) for n≫0. Let ∈ M(θ,β) be a representative for the middle term of the chain-limit. By Lemma <ref> we have (τ^+-)(_n)→∞, so for t_n:=() one has ζ_j(_n,t_n)→ρ_j(R^+()), contradicting the fact that the ρ_j are linearly independent at R^+(). Case 2: The chain-limit of _n lies in (,θ)×(θ,). Then _n converges over compact subsets of to some ∈ M(θ,) satisfying τ^+()=0. According to Lemma <ref> we have (_n)→(), so ζ_j(_n,t)→ζ_j(,t) for any t. Hence, ζ_1,ζ_2 must be linearly dependent at (,t). But d was chosen so that the conclusion of Lemma <ref> holds, so we have a contradiction.□ At any point (,t)∈ M(',)× where ζ_1,ζ_2 are linearly independent let ξ_1(,t),ξ_2(,t) be the orthonormal pair of vectors in _[t] obtained by applying the Gram-Schmidt process to ζ_1(,t) and ζ_2(,t), and let ξ_3=ξ_1×ξ_2 be the fibrewise cross-product of ξ_1 and ξ_2. Then {ξ_j(,t)}_j=1,2,3 is a positive orthonormal basis for _[t]. We now have the necessary ingredients to define the cut-down moduli space 33(,). Set c:M(,)×→[0,1], (,t)↦ b(t-()) and for j=1,2,3 define a section _j of the bundle _ over M(,)× by _j:=(1-c)∑_ia^-_ijρ^-_i+c∑_ia^+_ijξ_i. Choose a constant T_1 for which the conclusion of Lemma <ref> holds and define a function g:M(,)×→[0,1] by g(,t):=b(()-T_1)· b(()-t)· b(t-τ^-()). For j=1,2,3 we now define a section s_j of _ by s_j(,t):=(1-g(,t))· s_j([t])+g(,t)·_j(,t). Now set 33(,):={(,t)∈× s_1(,-t)=0, s_2(,t)=0}. In the study of the ends of 33(,) we will encounter certain subspaces of M(θ,) which we now define. For ∈ M(θ,) and j=1,2 set s_j():=(1-b(()))· s_j([0]) +b(())∑_i=1^3a^+_ijξ_i(,0) and define M_3;j(θ,):={∈ M(θ,) s_j()=0}. This space has expected dimension 2-3=-1 and is empty for “generic” choices of sections s_j and matrix A^+. There is a constant C_0<∞ such that for all (,t)∈33(,) one has |t|≤min(-τ^-(),())+C_0. 
That |t|+τ^-() is uniformly bounded above for (,t)∈33(,) is proved in the same way as the corresponding part of Lemma <ref>. To prove the same for |t|-(), suppose there were a sequence (_n,t_n)∈33(,) with |t_n|-(_n)→∞. After passing to a subsequence we may assume the following. * The sequence _n is chain-convergent; * There is a constant e=±1 such that |t_n|=et_n for all n; * The sequence et_n-τ^+(_n) converges in [-∞,∞] to some point t. Let j:=1/2(3+e). Then for n≫0 we have 0= s_j(_n,et_n)=s_j(_n[et_n]). According to Lemma <ref> one of the following two cases must occur. Case 1: The sequence (τ^+-)(_n) is bounded. Then et_n-τ^+(_n)→∞, so _n[et_n]→. By continuity of s_j we must have s_j()=0, which however will not hold for a “generic” section s_j. Case 2: (τ^+-)(_n)→∞. From Lemma <ref> we deduce that ^*_τ^+(_n)(_n) converges over compact subsets of to some ∈ M(β,), where β∈^1_Y. Then (_n)=τ^+_β(_n) for n≫0. Furthermore, ^*_τ^+_β(_n) converges over compacta to an element of some moduli space M(',β), where β≠'∈(Y). Case 2a: t=±∞. Then the exponential decay results of <cit.> imply that _n[et_n] converges to (if t=-∞) or to (if t=∞). This is ruled out in the same way as Case 1. Case 2b: t finite. Then ^*_et_n(_n) converges over compacta to ':=^*_t()∈ M(β,), and _n[et_n]→'[0]. But then s_j('[0])=0, which will not hold for a “generic” section s_j of the bundle , since M(β,) has dimension 1 whereas has rank 3.□ For any constant C_1<∞ there is constant L>0 such that for all (,t)∈33(,) satisfying ()≥ L one has |t|≤min(-τ^-(),())-C_1. If not, then there would be a constant C_1<∞ and a sequence (_n,t_n)∈33(,) with (_n)→∞ such that either (i) |t_n|>-τ^-(_n)-C_1 for all n, or (ii) |t_n|>(_n)-C_1 for all n. Case (i) is rule out as in the proof of Lemma <ref>. Now suppose (ii) holds. Because (_n)→∞ we have (_n)→∞. From Lemma <ref> we deduce that |t_n|-(_n) is bounded, so |t_n|-τ^-(_n)→∞. This implies that c(_n,t_n)=1 for n≫0. After passing to a subsequence we may assume that the sequence _n chain-converges and |t_n|=-et_n for some constant e=±1. Case 1: (τ^+-)(_n) is bounded. By Lemmas <ref> and <ref> the chain-limit of _n must lie in (,θ)×(θ,), so after passing to a subsequence we may assume that '_n:=^*_et_n(_n) converges over compacta to some ∈ M(θ,). Using Lemma <ref> we obtain g(_n,et_n)=b((_n)-et_n)=b(('_n))→ b(()). Let j:=1/2(3+e). Then 0= s_j(_n,et_n)→ s_j(). But then lies in M_3;j(θ,), which is empty by choice of the matrix A^+. Case 2: (τ^+-)(_n)→∞. Then the chain-limit of _n lies in (,θ)×(θ,β)×(β,) for some β∈^1_Y. For large n we now have (_n)=τ^+_β(_n) and ξ_j(_n,et_n)= j(_n,et_n), j=1,2. After passing to a subsequence we may assume that '_n:=^*_et_n(_n) converges over compacta to some ∈ M(θ,β). For large n we have g(_n,et_n)=b(τ^+_β(_n)-et_n)=b(τ^+_β('_n))→ b(τ^+()). Let j:=1/2(3+e). Then 0= s_j(_n,et_n)→(1-b(τ^+()))· s_j([0]) +b(τ^+())∑_ia^+_ijρ^+_i(,0). Thus, lies in M_3(θ,β), which is empty by choice of A^+. □ There is a constant L<∞ such that for all (,t)∈33(,) one has ()<L. For any (,t)∈33(,) with ()>T_1 let h()∈3 be the matrix whose coefficients h_ij() are given by ρ^-_j(,t)=∑_ih_ij()ξ_i(,t). By Lemma <ref> there is an L≥ T_1+1 such that for all (,t)∈33(,) with ()≥ L one has |t|≤min(-τ^-(),())-1, which implies that s_j(,t)=_j(,t). Given such a (,t), the triple (H,v_1,v_2):=(h(),-t-τ_a(),t-τ_a())∈3×× satisfies the equation (1-b(v_j))HA^-_j+b(v_j)A^+_j=0. for j=1,2. 
However, as observed in the proof of Proposition <ref>, these equations have no solution for generic matrices A^±.□ We will now prove Proposition <ref> in degree 4 by counting the number of ends of 33(,) modulo 2. Ends of 33(,): Let (_n,t_n) be a sequence in 33(,). After passing to a subsequence we may assume that the following hold: (i) The sequences ^*_-t_n(_n) and ^*_t_n(_n) converge over compact subsets of . (ii) The sequence ^*_τ^-(_n)(_n) converges over compacta to some ∈ M(,β), where β∈(Y). (iii) The sequences t_n and τ^-(_n) converge in [-∞,∞]. Suppose (_n,t_n) does not converge in 33(,). Case 1: β=. We show this cannot happen. First observe that the sequence (_n) converges in . Since Lemma <ref> provides an upper bound on τ^-(_n) and a lower bound on (_n) it follows that both sequences must be bounded. Applying the same lemma again we see that |t_n| is bounded. But then assumptions (ii) and (iii) imply that (_n,t_n) converges in 33(,), which we assumed was not the case. Case 2: β irreducible, M(,β)≤4. Then (_n)=0 for n≫0. As in the proof of Proposition <ref> we find that the corresponding number of ends of 33(,) is ψ d,. Case 3: β irreducible, M(,β)=5. Then (_n)=τ^+_β(_n) for n≫0, and (_n)→τ_d(). As in Case 1 we see that the sequences τ^-(_n) and t_n must be bounded, hence they both converge in by assumption (iii). From (ii) we deduce that _n converges over compacta to some '∈ M(,β) (related to by a translation). By Lemma <ref> we have ξ_j(_n,t)= j(_n,t) for n≫0 and any t, so _j(_n,t)→_j(',t). Setting t':=lim t_n we conclude that (',t')∈33(,β). The corresponding number of ends of 33(,) is dψ,.□ §.§ Calculation of ψ There are constants ^±∈/2 independent of Y and satisfying ^++^-=1 such that if ψ is defined in terms of “generic” sections s_1,s_2 that are sufficiently close and e is the sign of ν(A^+)-ν(A^-) then there is a homomorphism Ξ:C^*(Y)→ C^*+4(Y) such that ψ=v_3v_2+^e'+dΞ+Ξ d. To be precise, if s'∈() satisfies Property 4 and ⊂() is any sufficiently large finite-dimensional linear subspace then for any sufficiently small generic (_0,_1)∈× the conclusion of the proposition holds with s_j=s'+_j. The above proposition completes the proof of Proposition <ref> except for the order of v_2,v_3, which is insignificant in vue of Proposition <ref>. (The order could be reversed by a small change in the proof given below.) Let ,β∈^*(Y) with (,β)=5. Part (I) Suppose ()≢48. For -3≤ y≤3 we define a section χ_y of by 6χ_y:=(3-y)s_1+(3+y)s_2. In particular, χ_-3=s_1, χ_3=s_2. Let :={z∈:|(z)|≤3, |z|≥1} and let ':=/±1 be the surface-with-boundary obtained by identifying each z∈ with -z. The image of a point z∈ in ' will be denoted by [z]. Let ξ̅∈(), and let ξ̂ be a section of the bundle × S^1 over ^*(Y[0])× S^1 satisfying ξ̂(,-z)=-ξ̂(,z), so that ξ̂∈_a() in the notation of Section <ref>. We then define a section ξ of the bundle × over ^*(Y[0])× as follows. Let b_1(z):=b(|z|-2). For ∈^*(Y[0]) and z=(x,y)∈ let ξ(,z):=(1-b_1(z))·(ξ̅()+ξ̂(,z/|z|)) +b_1(z)χ_y(). Let f:→ be the smooth function given by f(z):=b_1(z)(z). Note that f(z)=(z) for |z|≥3, and f(z)=0 for |z|=1. Moreover, f(-z)=-z. (i) Let =(,β) be the subspace of M(,β)×' consisting of those points (,[z]) such that ξ([f(z)],z)=0, ξ([f(-z)],-z)=0. (ii) Let =(,β) be the subspace of M(,β)× S^1×[0,∞) consisting of those points (,z^2,r) such that z∈ S^1 and ξ̂([-r],z)=0, ξ̅([r])=0. If ξ̅ is “generic” and ξ̂ is given by a “generic” section of ⊗ (see Lemma <ref>) then will be a smooth 1–manifold-with-boundary. 
Now choose a section s'∈() satisfying Property 4. If is a sufficiently large finite-dimensional linear subspace of () and (_0,_1) a generic element of × then taking s_j=s'+_j, j=1,2 the space will be a smooth 1–manifold-with-boundary. (The reason that transversality can be achieved over the boundary component of M(,β)×' given by |z|=1 is essentially that if V is any real vector space then every element of V× V can be written as (a+b,a-b) for suitable a,b∈ V.) If in addition _0,_1 are sufficiently small then for -3≤ y≤3 the section χ_y will satisfy Property 4 and define the same cup product v_3:C^*(Y)→ C^*+3(Y) as s', by Lemma <ref>. The part of the boundary of given by |z|=1 can be identified with the boundary of (defined by r=0). To see this, let (,z)∈ M(,β)× and set _0:=[0]. Then (,[z])∈ if and only if ξ̅(_0)+ξ̂(_0,z)=0=ξ̅(_0)-ξ̂(_0,z), which in turn is equivalent to (,z^2,0)∈. This allows us to define a topological 1–manifold-with-boundary =(,β) as a quotient of the disjoint union ∐ by identifying each boundary point of with the corresponding boundary point of . The proposition will be proved by counting the ends and boundary points of modulo 2. Before doing this, we pause to define the homomorphism Ξ. Let ',β'∈^*(Y) with (',β')=4. Replacing (,β) by (',β') in Definition <ref> yields zero-dimensional manifolds _j(',β'), j=1,2. The argument that we will give below to determine the ends of _j(,β) can also be applied to show that _j(',β') is compact. Granted this, we define Ξ:=Ξ_1+Ξ_2, where Ξ_j has matrix coefficient Ξ_j',β':=#_j(',β'). Ends of (,β): Let (_n,[z_n]) be a sequence in (,β), where z_n=(x_n,y_n)∈^2. After passing to a subsequence we may assume that description (i) The sequence ^*_-x_n(_n) converges over compact subsets of to some ^-∈ M(^-,β^-). (ii) The sequence ^*_x_n(_n) converges over compact subsets of to some ^+∈ M(^+,β^+). (iii) The sequence (x_n,y_n) converges in [-∞,∞]×[-3,3] to some point (x,y). Suppose (_n,[z_n]) does not converge in (,β). Case 1: x finite. Then (^+,β^+)=4 and either ^+= or β^+=β. The corresponding number of ends of (,β) is (dΞ_1+Ξ_1d),β modulo 2. Case 2: x=±∞. Then for n≫0 one has 0=ξ([± x_n],± z_n)→χ_± y(^±[0]). Hence χ_± y(^±[0])=0. Since χ_± y satisfy Property 4 we must have (^±,β^±)≥3, so 5=(,β)≥(^-,β^-)+(^+,β^+)≥6. This contradiction shows that there are no ends in the case x=±∞. Ends of (,β): We argue as in part (I) of the proof of Proposition <ref>. Let (_n,z_n^2,r_n) be a sequence in (,β). After passing to a subsequence we may assume that r_n converges in [0,∞] to some point r. Then the number of ends modulo 2 corresponding to r<∞ is (dΞ_2+Ξ_2d),β. Using Proposition <ref> and Lemma <ref> we see that the number of ends corresponding to r=∞ is v_3v_2,β. Boundary points of (,β): These are the points (,[z]) in M(,β)×' where (z)=3 and 0=ξ([x],z)=s_2([x]), 0=ξ([-x],-z)=s_1([-x]). The number of such points is by definition ψ,β. Since the number of ends plus the number of boundary point of must be zero modulo 2 we obtain the equation eqn:psi-v3v2 in the case ()≢42. Part (II) Suppose ()≡48. We define maps V^±:[-3,3]→^3 by 6V^±(y):=(3-y)A^±_1 +(3+y)A^±_2. Choose generic elements L̅^±∈^3 and functions L̂^±:S^1→^3 satisfying L̂^±(-z)=-L̂^±(z) for z∈ S^1. We define maps L^±:→^3 by L^±(z):=(1-b_1(z))· (L̅^±+L̂^±(z/|z|))+ b_1(z)· V^±((z)), where the function b_1 is as in eqn:b1def. Let (,β) be the vector bundle over × obtained by pulling back the bundle →^*(Y[0]) by the map ×→^*(Y[0]), (,z)↦[f(z)]. 
Let c and g be the functions defined in eqn:c23def and eqn:gomt, respectively. We define sections ,s of (,β) by (,z):=(1-c(,f(z)))∑_i=1^3L^-_i(z)ρ^-_i(,f(z)) +c(,f(z))∑_i=1^3L^+_i(z)ρ^+_i(,f(z)), s(,z):=(1-g(,f(z)))·ξ([f(z)],z)+g(,f(z))·(,z). Let =(,β) be the subspace of ×' consisting of those points (,[z]) such that s(,z)=0, s(,-z)=0. We define sections ,s̅ of the bundle (,β) over × by (,r):=(1-c(,r))∑_i=1^3L̅^-_iρ^-_i(,r) +c(,r)∑_i=1^3L̅^+_iρ^+_i(,r), s̅(,r):=(1-g(,r))·ξ̅([r])+g(,r)·(,r). Let (,β) be the vector bundle over × S^1× obtained by pulling back the bundle by the map × S^1×→ Y[0], (,z,r)↦[r]. We define sections ,ŝ of (,β) by (,z,r):=(1-c(,r))∑_i=1^3L̂^-_i(z)ρ^-_i(,r) +c(,r)∑_i=1^3L̂^+_i(z)ρ^+_i(,r), ŝ(,z,r):=(1-g(,r))·ξ̂([r],z)+g(,r)·(,z). Note that ŝ(,-z,r)=-ŝ(,z,r). Let =(,β) be the subspace of × S^1×[0,∞) consisting of those points (,z^2,r) such that z∈ S^1 and ŝ(,z,-r)=0, s̅(,r)=0. By inspection of the formulas involved one finds that for |z|=1 one has (,0)+(,z,0) =(,z), s̅(,0)+ŝ(,z,0) =s(,z). Therefore, the part of the boundary of given by |z|=1 can be identified with the boundary of (defined by r=0). By gluing and correspondingly we obtain a topological 1–manifold-with-boundary . There is a constant C_0<∞ such that for all (,[z])∈ one has |f(z)|≤min(-τ^-(),τ^+())+C_0. The proof is similar to that of Lemma <ref>. We must provide upper bounds on both quantities |f(z)|+τ^-() and |f(z)|-τ^+() for (,[z])∈. The proof is essentially the same in both cases, so we will only spell it out in the second case. Suppose, for contradiction, that (_n,[z_n]) is a sequence in with |f(z)|-τ^+(_n)→∞. By perhaps replacing z_n by -z_n we can arrange that (z_n)≥0. Then f(z_n)≥0 as well, and g(_n,f(z_n))=0 for n≫0. Let z_n=(x_n,y_n). After passing to a subsequence we may assume that z_n converges in [0,∞]×[-3,3] to some point (x,y). Case 1: x finite. Let z:=(x,y)∈. The sequence _n converges to over compact subsets of , so for large n we have 0=ξ(_n[f(z_n)],z_n)→ξ(,z). However, the space of all w∈ for which ξ(,w)=0 has expected dimension 2-3=-1, so this space is empty for “generic” sections s_1,s_2,ξ̅,ξ̂. Hence, x cannot be finite. Case 2: x=∞. Then f(z_n)=x_n for large n. Now, ^*_x_n_n converges over compacta to , so for large n we have 0=ξ(_n[x_n],z_n)=χ_y_n(_n[x_n])→χ_y(). However, the space of all t∈[-3,3] for which χ_t()=0 has expected dimension 1-3=-2, so this space is empty for “generic” sections s_1,s_2. Hence, x≠∞. This contradiction proves the lemma.□ In the proof of Lemma <ref> below we will encounter certain limits associated to sequences in with chain-limits in (,θ)×(θ,β). These limits lie in cut down moduli spaces analogous to those introduced in Definitions <ref> and <ref>, with M(,θ) or M(θ,β) in place of . We now define these cut-down spaces in the case of M(θ,β) and observe that they are “generically” empty. The case of M(,θ) is similar. For any (,z)∈× let s(,z):= (1-b(τ^+()-f(z)))·ξ([f(z)],z) +b(τ^+()-f(z))∑_i=1^3L^+_i(z)ρ^+_i(,f(z)). Let (θ,β) be the subspace of M(θ,β)×' consisting of those points (,[z]) such that s(,z)=0, s(,-z)=0. Then (θ,β) has expected dimension 3-6=-3 and is empty for “generic” sections s_1,s_2,ξ̅,ξ̂ and generic choices of A^+,L̅^+,L̂^+. Let (θ,β) be the subspace of M(θ,β)×[-3,3] consisting of those points (,y) such that (1-b(τ^+()))·χ_y([0]) +b(τ^+())∑_iV^+_i(y)ρ^+_i(,0)=0. 
We observe that the space (θ,β) (a parametrized version of the space M_3(θ,β) defined in Subsection <ref>) has expected dimension 2-3=-1 and is empty for “generic” sections s_1,s_2 and generic matrix A^+. For any constant C_1<∞ there is constant L>0 such that for all (,[z])∈ satisfying ()≥ L one has |f(z)|≤min(-τ^-(),τ^+())-C_1. The proof is similar to that of Lemma <ref>. If the lemma did not hold there would be a sequence (_n,[z_n]) in such that (_n)→∞ and one of the following two conditions hold: (i) |f(z_n)|>-τ^-(_n)-C_1 for all n, (ii) |f(z_n)|>τ^+(_n)-C_1 for all n. Suppose (ii) holds, the other case being similar. By replacing z_n by -z_n, if necessary, we can arrange that (z_n)≥0. From Lemma <ref> we deduce that the sequence f(z_n)-τ^+(_n) is bounded, whereas f(z_n)-τ^-(_n)→∞. For large n we therefore have c(_n,f(z_n))=1, g(_n,f(z_n))=b(τ^+(_n)-f(z_n)). Let z_n=(x_n,y_n). After passing to a subsequence we may assume that * '_n:=^*_x_n_n converges over compact subsets of to some '∈ M(θ,β); * z_n converges in [0,∞]×[-3,3] to some point z=(x,y). Case 1: x finite. Then _n converges over compacta to some ∈, and 0=s(_n,z_n)→ s(,z). Beause the sequence z_n is bounded, we also have c(_n,f(-z_n))=1 for large n, so 0=s(_n,-z_n)→ s(,-z). But then (,[z]) belongs to (θ,β), contradicting the fact that that space is empty. Case 2: x=∞. Since τ^+('_n)=τ^+(_n)-x_n, we obtain g(_n,f(z_n))=b(τ^+('_n)) for n≫0. Therefore, 0=s(_n,z_n)→ (1-b(τ^+(')))·χ_y('[0]) +b(τ^+('))∑_iV^+_i(y)ρ^+_i(',0). But this means that (',y) belongs to (θ,β), which is empty. This contradiction proves the lemma.□ There is a constant C_0<∞ such that for all (,z^2,r)∈ one has r≤min(-τ^-(),τ^+())+C_0. This is similar to the proof of Lemma <ref>.□ For any constant C_1<∞ there is constant L>0 such that for all (,z^2,r)∈ satisfying ()≥ L one has r≤min(-τ^-(),τ^+())-C_1. This is similar to the proof of Lemma <ref>.□ Choose L≥2 such that the conclusions of Lemmas <ref> and <ref> hold with C_1=1. For all (,[z]∈ with ()≥ L we then have s(,z)=(,z), and for all (,z^2,r)∈ with ()≥ L we have ŝ(,z,-r)=(,z,-r), s̅(,r)=(,r). From Lemma <ref> it follows that L is a regular value of the real functions on and defined by . Therefore, :={(,[z])∈()≤ L}, :={(,z^2,r)∈()≤ L} are smooth 1–manifolds-with-boundary, and ^L:=∪ is a topological 1–manifolds-with-boundary. (As before we identify the part of given by |z|=1 with the part of given by r=0.) Ends of ^L: From Lemma <ref> we deduce that every sequence (_n,[z_n]) in which satisfies (_n)>0 has a convergent subsequence. Similarly, it follows from Lemma <ref> that every sequence (_n,z_n^2,r_n) in with (_n)>0 has a convergent subsequence. (See the proof of Proposition <ref>, “Ends of 23(,β)”, Case 2.) Therefore, all ends of ^L are associated with sequences on which =0. The number of such ends, counted modulo 2, is given by the same formula as in Part (I), namely (v_3v_2+dΞ+Ξ d),β. Boundary points of ^L: The boundary of ^L decomposes as ^L=∪'∪, where and are the parts of the boundaries of and , respectively, given by ()=L, and ' is the part of the boundary of given by (z)=±3. By choice of matrices A^± there are no points (,t)∈33(,β) with ()≥ L, hence W'_=33(,β) and #'=ψ,β. By Lemma <ref> we can identify =(,θ)×(θ,β)×, =(,θ)×(θ,β)×, where is the set of points (H,τ,[z]) in 3××' satisfying (1-b(f(z)-τ))HL^-(z)+b(f(z)-τ)L^+(z), (1-b(f(-z)-τ))HL^-(-z)+b(f(-z)-τ)L^+(-z), whereas is the set of points (H,τ,z^2,r) in 3×× S^1×[0,∞) satisfying (1-b(-r-τ))HL̂^-(z)+b(-r-τ)L̂^+(z)=0, (1-b(r-τ))HL̅^-+b(r-τ)L̅^+=0. 
Here, (H,τ) corresponds to (h(),τ_a()). It follows from these descriptions that #(∪)=',β, where =#(∪)∈/2 is independent of the manifold Y. To prove the theorem it only remains to understand the dependence of on the pair of matrices A=(A^+,A^-). To emphasize the dependence on A we write =(A) and =(A). The space is independent of A. The part of corresponding to |z|=1 is also independent of A and is empty for generic L̅,L̂ for dimensional reasons. Let P denote the space of all pairs (B^+,B^-) of 3×2 real matrices with non-zero columns B^±_j. Let P^±:={(B^+,B^-)∈ P±(ν(B^+)-ν(B^-))>0}, where ν is as in eqn:nuB. Note that each of P^+,P^- is homotopy equivalent to S^2× S^2 and therefore path connected. For any smooth path C:[0,1]→ P we define :=⋃_0≤ t≤1(C(t))×{t}⊂3××'×[0,1]. As observed above there are no points (H,τ,[z],t) in with |z|=1. Since b_1(z)>0 for |z|>1 we can therefore make regular (i.e. transversely cut out) by varying C alone. If is regular then it is a compact 1–manifold-with-boundary, and =(C(0))∪(C(1))∪ X_C, where X_C is the set of points (H,τ,x,t) in 3×××[0,1] satisfying the two equations (1-b(x-τ))HC^-_1(t)+b(x-τ)C^+_1(t)=0, (1-b(-x-τ))HC^-_2(t)+b(-x-τ)C^+_2(t)=0. It follows that (C(0))+(C(1))=#X_C. If A,B∈ P^+ then we can find a path C:[0,1]→ P^+ from A to B. Then X_C is empty. By perturbing C(t) for 0<t<1 we can arrange that is regular. This yields (A)=(B). The same holds if A,B∈ P^-. Let ^± be the value that takes on P^±. To compute ^++^-, let (e_1,e_2,e_3) be the standard basis for ^3 and define C:[0,1]→ P by -C^+_1(t) =C^-_1(t):=e_1, -C^+_2(t) :=(1-t)e_1+te_2, C^-_2(t) :=(1-t)e_2+te_1. Then C(0)∈ P^+ and C(1)∈ P^-. Moreover, X_C consists of the single point (I,0,0,1/2), and this point is regular. (Here I is the identity matrix.) If we perturb C a little in order to make regular then X_C will still consist of a single, regular point. We conclude that ^++^-=#X_C=1. This completes the proof of the proposition.□ § INSTANTONS REDUCIBLE OVER OPEN SUBSETS The following proposition is implicit in <cit.> but we include a proof for completeness. Let X be an oriented connected Riemannian 4–manifold and E→ X an oriented Euclidean 3–plane bundle. Suppose A is a non-flat ASD connection in E which restricts to a reducible connection over some non-empty open set in X. Then there exists a rank 1 subbundle of E which is preserved by A. This is a simple consequence of the unique continuation argument in the proof of <cit.>. The proof has two parts: local existence and local uniqueness. (i) Local existence. By unique continuation, every point in X has a connected open neighbourhood V such that A|_V is reducible, i.e. there exists a non-trivial automorphism u of E|_V such that ∇_Au=0. The 1–eigenspace of u is then a line bundle preserved by A. (ii) Local uniqueness. Because A is not flat, it follows from unique continuation that the set of points in X where F_A=0 has empty interior. Now let V be any non-empty connected open set in X and suppose A preserves a rank 1 subbundle ⊂ E|_V. We show that is uniquely determined. Let x∈ V be a point where F_A≠0. By the holonomy description of curvature (see <cit.>) we can find a loop in V based at x such that the holonomy _(A) of A along is close to but different from the identity. The 1–eigenspace of _(A) is then 1–dimensional and must agree with the fibre _x. If x' is an arbitrary point in V then there is a similar description of _x' in terms of the holonomy of A along a loop obtained by conjugating with a path in V from x to x'. 
□ § UNIQUE CONTINUATION ON A CYLINDER As in Subsection <ref> let Y be a closed oriented connected 3-manifold and P→ Y an 3 bundle. If Y is not an integral homology sphere then we assume P is admissible. Let J⊂ be an open interval. We consider the perturbed ASD equation for connections in the bundle J× P→ J× Y obtained by adding a holonomy perturbation to the Chern-Simons function. For a connection A in temporal gauge the equation takes the form A_t/ t=-*F(A_t)+V(A_t), where A_t is the restriction of A to the slice {t}× P and V is the formal gradient of the perturbation. The following proposition is probably well known among experts, but we include a proof for completeness. Suppose A,A' are perturbed ASD connections in the bundle J× P→ J× Y. If A and A' are in temporal gauge and A_T=A'_T for some T∈ J, then A=A'. We will apply (an adaption of) the abstract unique continuation theorem in <cit.>. To this end, fix an arbitrary connection B in P and let c_t=A_t-A'_t, a_t=A_t-B, a'_t=A'_t-B. We have F(A_t)=F(B)+d_Ba_t+a_t∧ a_t and similarly for A'_t, so c_t/ t+*d_Bc_t=-*(a_t∧ c_t+c_t∧ a'_t) +V(A_t)-V(A'_t). By <cit.> we have V(A_t)-V(A'_t)_L^2≤c_t_L^2, hence c_t/ t+*d_Bc_t_L^2≤ϕ(t)c_t_L^2 where ϕ(t)=(a_t_∞+a'_t_∞+1). Because *d_B is a formally self-adjoint operator on 1–forms on Y and ϕ is locally square integrable (in fact, continuous), we deduce from <cit.> that for any compact subinterval [t_0,t_1] of J there are constants C_0,C_1 such that for t_0≤ t≤ t_1 one has c_t_L^2≥c_t_0_L^2·exp(C_0t+C_1). (<cit.> considers the case when c_t is defined for 0≤ t<∞, but the approach works equally well in our case.) Taking t_1=T we obtain c_t=0 for t<T. Replacing c_t by c_-t we get c_t=0 for t>T as well.□ 10 AS1 M. F. Atiyah and I. M. Singer. The index of elliptic operators: I. Ann. of Math., 87:484–530, 1968. BD1 P. J. Braam and S. K. Donaldson. Floer's work on instanton homology, knots and surgery. In H. Hofer, C. H. Taubes, A. Weinstein, and E. Zehnder, editors, The Floer Memorial Volume, pages 195–256. Birkhäuser, 1995. DHST1 I. Dai, J. Hom, M. Stoffregen, and L. Truong. An infinite-rank summand of the homology cobordism group. arXiv:1810.06145. D1 S. K. Donaldson. An application of gauge theory to four dimensional topology. J. Diff. Geom., 18:279–315, 1983. D2 S. K. Donaldson. The orientation of Yang–Mills moduli spaces and 4–manifold topology. J. Diff. Geom., 26:397–428, 1987. D5 S. K. Donaldson. Floer Homology Groups in Yang–Mills Theory. Cambridge University Press, 2002. DK S. K. Donaldson and P. B. Kronheimer. The Geometry of Four-Manifolds. Oxford University Press, 1990. Miller-Eismeier1 M. Miller Eismeier. Equivariant instanton homology. arXiv:1907.01091. FS2 R. Fintushel and R. J. Stern. Definite 4–manifolds. J. Diff. Geom., 28:133–141, 1988. F1 A. Floer. An instanton invariant for 3–manifolds. Comm. Math. Phys., 118:215–240, 1988. Fr0 K. A. Frøyshov. On Floer homology and 4–manifolds with boundary, 1995. D.Phil. thesis, University of Oxford. Fr1 K. A. Frøyshov. The Seiberg–Witten equations and four-manifolds with boundary. Math. Res. Lett., 3:373–390, 1996. Fr3 K. A. Frøyshov. Equivariant aspects of Yang–Mills Floer theory. Topology, 41:525–552, 2002. Fr7 K. A. Frøyshov. An inequality for the h–invariant in instanton Floer theory. Topology, 43:407–432, 2004. Fr13 K. A. Frøyshov. Compactness and gluing theory for monopoles, volume 15 of Geometry & Topology Monographs. Geometry & Topology Publications, 2008. Fr4 K. A. Frøyshov. Monopole Floer homology for rational homology 3–spheres. Duke Math. 
J., 155:519–576, 2010. Fr14 K. A. Frøyshov. 4–manifolds and intersection forms with local coefficients. J. Diff. Geom., 91:233–259, 2012. Hirsch M. W. Hirsch. Differential Topology. Springer, 1976. HM D. Husemoller and J. Milnor. Symmetric Bilinear Forms. Springer-Verlag, 1973. Kotsch1 D. Kotschick. SO(3)–invariants for 4-manifolds with b_2^+=1. Proc. London Math. Soc., 63(3):426–448, 1991. KM3 P. B. Kronheimer and T. S. Mrowka. Embedded surfaces and the structure of Donaldson's polynomial invariants. J. Diff. Geom., 41:573–734, 1995. KM5 P. B. Kronheimer and T. S. Mrowka. Monopoles and Three-Manifolds. Cambridge University Press, 2007. KM7 P. B. Kronheimer and T. S. Mrowka. Knot homology groups from instantons. J. Topology, 4:835–918, 2011. Jeffrey-Lee-Manifolds-DG Jeffrey M. Lee. Manifolds and Differential Geometry. AMS, 2009. NST1 Y. Nozaki, K. Sato, and M. Taniguchi. Filtered instanton Floer homology and the homology cobordism group. arXiv:1905.04001. Ogawa H. Ogawa. Lower bounds for solutions of differential inequalities in Hilbert space. Proc. AMS, 16:1241–1243, 1965. OS6 P. S. Ozsváth and Z. Szabó. On the Floer homology of plumbed three-manifolds. Geometry & Topology, 7:185–224, 2003. Scaduto2 Ch. W. Scaduto. On definite lattices bounded by a homology 3–sphere and Yang-Mills instanton Floer theory. arXiv:1805.07875. Scaduto1 Ch. W. Scaduto. Instantons and odd Khovanov homology. J. Topology, 8(3):744––810, 2015. University of Oslo, Norway Email: [email protected]
http://arxiv.org/abs/2307.04501v1
20230710114528
A Privacy-Preserving and Accountable Billing Protocol for Peer-to-Peer Energy Trading Markets
[ "Kamil Erdayandi", "Lucas C. Cordeiro", "Mustafa A. Mustafa" ]
cs.CR
[ "cs.CR", "cs.CE" ]
A Privacy-Preserving and Accountable Billing Protocol for Peer-to-Peer Energy Trading Markets

Kamil Erdayandi^1, Lucas C. Cordeiro^1 and Mustafa A. Mustafa^1,2
^1Department of Computer Science, The University of Manchester, UK
^2imec-COSIC, KU Leuven, Belgium
Email: {kamil.erdayandi, lucas.cordeiro, mustafa.mustafa}@manchester.ac.uk

This work was supported by EPSRC through EnnCore [EP/T026995/1] and by the Flemish Government through the FWO-SBO SNIPPET project [S007619]. K.E. is funded by the Ministry of National Education, Republic of Turkey.

This paper proposes a privacy-preserving and accountable billing (PA-Bill) protocol for trading in peer-to-peer energy markets, addressing situations where there may be discrepancies between the volume of energy committed and delivered. Such discrepancies can lead to challenges in providing both privacy and accountability while maintaining accurate billing. To overcome these challenges, a universal cost splitting mechanism is proposed that prioritises privacy and accountability. It leverages a homomorphic encryption cryptosystem to provide privacy and employs blockchain technology to establish accountability. A dispute resolution mechanism is also introduced to minimise the occurrence of erroneous bill calculations while ensuring accountability and non-repudiation throughout the billing process. Our evaluation demonstrates that PA-Bill offers an effective billing mechanism that maintains privacy and accountability in peer-to-peer energy markets utilising a semi-decentralised approach.

Billing, Privacy, Accountability, Peer-to-peer Energy Market, Homomorphic Encryption, Blockchain

§ NOMENCLATURE

c_i, p_j, u_k: i-th consumer, j-th prosumer, k-th user
N_C, N_P, N_U: Number of consumers, prosumers, users
V^P2P: P2P market's traded electricity volume array
V^Real: Real electricity consumption array
π_P2P, π_FiT, π_RT: P2P, FiT, Retail price
Stat: Array of the statements of the users
Bal_sup: Balance of the supplier
inDev: Array of the individual deviations of the users
Dev^Tot: Total deviations of the users
KGen_pe(k): Paillier key generation method
PK_sup, SK_sup: Public, Private (Secret) key pair of the Supplier
{.}_ℰ: Data homomorphically encrypted with PK_sup
H(.): Hash Function

§ INTRODUCTION

§.§ Motivation and Background

Peer-to-peer (P2P) energy trading enables users to obtain clean energy at more reasonable prices than traditional suppliers, making it accessible to a wider society <cit.>. It facilitates direct energy exchange between households that harness renewable energy sources (RES) <cit.>. This approach empowers individuals to become active participants in the energy system <cit.>, allowing RES owners to optimise their profits and reduce their bills through trading with other users <cit.>. Although P2P energy trading markets offer various benefits, some challenges hinder their widespread adoption. Firstly, the vast amount of data exchanged can reveal sensitive information about users <cit.>, such as their energy usage habits and lifestyle patterns.
Access to this data poses significant privacy risks <cit.> and could potentially violate privacy protection regulations, e.g., GDPR <cit.>. Thus, it is crucial to ensure privacy-preserving data processing and protect data from unauthorised access <cit.>. Secondly, such markets require secure and accountable solutions. However, it is challenging to audit transactions without a tamper-proof system <cit.>. To ensure fair and accurate energy trading, it is also essential to guarantee the integrity and verifiability of any data used. Thirdly, what users commit at P2P markets often deviates from what they deliver due to intermittent RES output. Hence, any billing model needs mechanisms to deal with such deviations.

§.§ Relevant Literature

Within P2P energy trading, two crucial phases are market clearance and billing & settlement <cit.>. Since privacy-preserving market clearing mechanisms have already been explored <cit.>, this paper focuses on the billing phase. Madhusudan et al. <cit.> propose four billing models for P2P energy markets which account for deviations in energy volumes from the users' bids and incorporate individual, social, or universal cost-sharing mechanisms to ensure cost-effectiveness for both consumers and prosumers. Nonetheless, they do not explore user privacy. A privacy-preserving billing protocol that incorporates an individual cost-sharing mechanism has been proposed in <cit.>. However, it relies on a remote server for bill calculations, which poses a risk of a single point of failure. Singh et al. <cit.> propose a method that uses blockchain and homomorphic schemes to protect the confidentiality of user data while enabling efficient data analysis. They do not explore any billing mechanisms. Gür et al. <cit.> propose a system based on blockchain technology and IoT devices to facilitate billing. To ensure data confidentiality, the system employs session keys and stores the encrypted data on the blockchain. However, this is still vulnerable to breaches, as unauthorised parties who gain access to these keys can access the sensitive data. In summary, no prior study on P2P market billing fully satisfies the three essential requirements: protecting user privacy, maintaining strong system accountability, and accommodating variations in user consumption. Neglecting any of these elements undermines market trust, transparency and fairness, which are essential to the success and sustainability of such markets. Furthermore, integrating these three features within a single platform efficiently poses considerable challenges.

§.§ Contributions and Organization

To address the issues raised in the existing literature, we propose a novel privacy-preserving and accountable billing (PA-Bill) protocol, which effectively mitigates the challenges surrounding security, privacy, accountability, and user consumption variations prevalent in current studies. PA-Bill utilises a universal cost-splitting billing model that determines billing conditions from total rather than individual deviations, mitigating the risk of sensitive information leakage. It also avoids a single point of failure by performing most calculations locally in a semi-decentralised manner. To preserve privacy, the mechanism employs homomorphic encryption in bill calculations. Moreover, PA-Bill utilises blockchain technology to integrate accountability mechanisms that address possible conflicts during the billing calculation process. To minimise privacy leakage, only the hashed version of the data is stored on the blockchain.
Finally, PA-Bill can support large communities of 500 households. Unlike other solutions, PA-Bill integrates privacy protection, accountability, and the handling of user consumption variations into a single, efficient solution. To the best of our knowledge, no previous work has successfully implemented an efficient billing model that simultaneously preserves privacy, ensures accountability, and effectively handles discrepancies between committed and delivered volumes.

The rest of the paper is structured as follows: Section <ref> outlines the preliminaries. The proposed PA-Bill is presented in Section <ref>, while its performance is evaluated in Section <ref>. Finally, Section <ref> concludes the paper.

§ PRELIMINARIES

§.§ System Model

Our proposed billing protocol, illustrated in Fig. <ref>, involves prosumers, consumers, a trading platform (TP), a distributed ledger/blockchain (DLT), a referee, and a supplier. Prosumers generate energy through renewables, consume the volume they require, and sell any surplus energy. Consumers solely consume energy.
Households have home energy management systems (HEMs) and smart meters (SMs) that measure electricity flows, provide real-time measurements, and facilitate P2P trading for the user. Prosumers and consumers can trade electricity through a P2P market using a trading platform (TP). If necessary, they can also buy or sell electricity from/to a supplier as a backup option. However, P2P trading is more beneficial than relying on the supplier due to pricing considerations <cit.>. Financial reconciliation occurs during settlement cycles (SCs) for users involved in trading. Within each SC, data regarding the actual electricity usage of households and their commitments to trade in the market are stored on the DLT. Households calculate their bills locally in a decentralised manner. If a dispute arises, a referee intervenes to resolve it by requesting data from households and retrieving it from the DLT.

§.§ Threat Model and Assumptions

Our threat model comprises untrustworthy and semi-honest entities. Prosumers and consumers, who may attempt to violate the protocol specifications and obtain sensitive data of other users, are considered untrustworthy. Prosumers may try to maximise their revenue, while consumers may aim to minimise their expenses. Semi-honest entities include the TP, referee, and supplier. They adhere to the protocol specifications, but they may still be curious to learn sensitive data of users. SMs are tamper-proof and sealed. No one, including their users, can tamper with them without being detected. Users act rationally by seeking the most cost-effective electricity to buy or sell <cit.>. We assume that the entities communicate over secure and authentic communication channels.

§.§ Design Requirements

* No single point of failure (SPF): To avoid an SPF, calculations and data storage should be distributed <cit.>.
* Privacy: Confidentiality of individual users' volumes of energy traded and consumed, as well as of their individual deviations and deviation signs, should be provided.
* Accountability: Disputes arising from erroneous bill calculations must be addressed in an accountable way to prevent any party from denying responsibility.
* Fair deviation cost distribution: The cost of P2P market deviations should be split fairly among market participants.

§.§ Building Blocks

Homomorphic encryption (HE) enables computations to be performed on encrypted data, resulting in encrypted outputs that produce the same results as if the operations were conducted on unencrypted data <cit.>. Specifically, we deploy the Paillier cryptosystem, which supports homomorphic addition and scalar multiplication on ciphertexts <cit.>. Our solution ensures the privacy of households by encrypting sensitive information such as energy consumption data per SC. Billing calculations are performed on this encrypted data, thereby preserving the confidentiality of the information. We use blockchain technology to provide accountability by ensuring that transactions are permanently recorded in a decentralised and immutable system with append-only storage. Transactions recorded on a blockchain cannot be altered by design, ensuring that they are accurate and trustworthy <cit.>.

§ PRIVACY PRESERVING AND ACCOUNTABLE BILLING (PA-BILL) PROTOCOL

In this section, we propose a privacy-preserving and accountable billing protocol for P2P energy markets where users' actual energy consumption may differ from the volumes they committed. It protects sensitive household information and enables system entities to verify accurate billing calculations.
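Before detailing the protocol, the following minimal Python sketch makes the Paillier building block described above concrete, showing the two homomorphic operations PA-Bill relies on: addition/subtraction of ciphertexts and multiplication of a ciphertext by a plaintext scalar. The use of the open-source phe (python-paillier) library, the key length and the numeric values are our illustrative assumptions; the protocol itself does not prescribe a particular implementation.

# Minimal sketch of the Paillier operations used in PA-Bill (illustrative only).
# Assumes the open-source `phe` (python-paillier) package; values are made up.
from phe import paillier

# Supplier generates the monthly key pair (PK_sup, SK_sup).
pk_sup, sk_sup = paillier.generate_paillier_keypair(n_length=2048)

# A household encrypts its real consumption and its committed P2P volume (kWh).
v_real_enc = pk_sup.encrypt(5.2)   # measured by the smart meter
v_p2p_enc = pk_sup.encrypt(4.0)    # volume committed on the P2P market

# Homomorphic subtraction: individual deviation computed without decryption.
in_dev_enc = v_real_enc - v_p2p_enc

# Homomorphic scalar multiplication: e.g., pricing the deviation at pi_RT.
pi_rt = 0.30                       # illustrative retail price per kWh
cost_enc = in_dev_enc * pi_rt

# Only the supplier, holding SK_sup, can decrypt the results.
print(sk_sup.decrypt(in_dev_enc))  # approx. 1.2
print(sk_sup.decrypt(cost_enc))    # approx. 0.36

In PA-Bill, billing quantities are processed in this encrypted form throughout the settlement cycles, and final bills and revenues are only released at the end of the month with the supplier's private key.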
§.§ PA-Bill Overview The process of PA-Bill protocol is illustrated in Fig. <ref>, which includes interactions between the entities. The system utilises the public-private key pair of the supplier for all homomorphically encrypted calculations. A distinct set of HE keys, namely PK_sup and SK_sup are generated for each billing month. Additionally, each month the consumers and prosumers are paired together to perform accountable calculations. In the energy trading model, users send homomorphically encrypted bid-offer data to the TP, which calculates the final trading price π_P2P and the amount of energy V^P2P[u_k] that each user u_k will trade via the P2P market, as in <cit.>. During each SC, π_P2P is publicly released. V^P2P[u_k] is shared with related paired users for future calculations, and its hash is stored on the DLT for future verification. SMs measure their users' actual imported/exported electricity and transmit the encrypted version (V^Real[u_k]) to relevant users. The hash of this encrypted version is also stored on the DLT. After sending and storing related data for billing, the calculation of bills among prosumers and consumers is performed in three stages in a privacy-preserving way. Firstly, individual deviations of users are calculated. Consumers calculate the individual deviations of prosumers and vice versa. Secondly, the total deviations of consumers and prosumers are calculated by six user selected from consumers and prosumers. Thirdly, statements (bills/revenues) of users are calculated. To protect sensitive data such as energy consumed/traded, and individual energy deviations of households, our work utilises HE scheme to process data while preserving privacy. However, it is crucial to design the billing algorithm in such a way that it avoids any indirect leakage of private information despite the use of encryption. Traditional billing methods <cit.> have the potential to expose confidential information by using individual deviations between actual and committed energy volumes to determine the “conditions" in calculating bills. This enables inferences to be made about whether the actual electricity consumption volume is lower or higher than the committed data. To address this issue, we propose a privacy-preserving and accountable cost-splitting billing that uses total deviations of consumers and prosumers rather than individual deviations to determine billing conditions. In the event of a dispute, the referee requests the necessary data from households, as well as it retrieves the hash of the previously stored data from DLT (to ensure the accuracy of the data requested from households) to settle the dispute. In this case, the referee corrects erroneous computations of the pair of customer and prosumer whose calculations do not match each other and identifies the responsible party in the pair. The responsible party is penalised, incentivising them to act truthfully, which would otherwise result in penalties. Besides, the referee can directly calculate the supplier's balance since the calculations do not involve any confidential information. Finally, at the end of the month, final bills and revenues, and the balance of the supplier are released with the help of the referee and the private homomorphic key of the supplier. §.§ Technical Details of PA-Bill At the start of each billing period (e.g., a month), the following two steps (1-2) are carried out. §.§.§ Generation of Keys The supplier generates a public-private HE (Paillier) key pair: KGen_pe(k) PK_sup, SK_sup. 
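To make the key generation and the ciphertext operations used throughout the protocol concrete, the following is a minimal sketch based on the Python phe library (the Paillier implementation named in the evaluation section); the key length matches the reported setting, while the readings and variable names are purely illustrative assumptions.

from phe import paillier

# Supplier generates the monthly Paillier key pair (PK_sup, SK_sup);
# 2048-bit keys match the setting reported in the evaluation.
pk_sup, sk_sup = paillier.generate_paillier_keypair(n_length=2048)

# A smart meter encrypts its measured volume (the kWh values are illustrative).
v_real = pk_sup.encrypt(3.2)   # V^Real[u_k]
v_p2p = pk_sup.encrypt(2.5)    # V^P2P[u_k] committed on the P2P market

# Homomorphic operations used by PA-Bill: ciphertext addition/subtraction
# and multiplication by a plaintext scalar (e.g. a price).
in_dev = v_real - v_p2p        # encrypted individual deviation
weighted = in_dev * 10         # scalar multiplication on the ciphertext

# Only the supplier, holding SK_sup, can decrypt.
print(sk_sup.decrypt(in_dev))  # approximately 0.7

Computations on the encrypted values give the same results as on the plaintexts, which is what allows households and the referee to evaluate bills without ever seeing each other's readings.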
§.§.§ Matching customers and prosumers The referee conducts a random matching process in which each consumer is paired with a list of prosumers and vice versa. The number of users in the lists may exceed one or be zero in cases where N_C > N_P or N_C < N_P, while the lists contain only one user if N_C = N_P. Here, N_C and N_P denote the respective number of customers and prosumers. The function M(u_k) returns the list of users that have been matched to the user u_k. At each SC, the following six steps (3–8) are carried out. §.§.§ Transfer and Storage of P2P Traded Data TP makes the P2P trading price public by storing it at DLT in plaintext. For each u_k, TP transmits the homomorphically encrypted value of the traded volume V^P2P[u_k] to user u_k and to the users in M(u_k). The privacy-preserving calculation of the encrypted traded values by user u_k (V^P2P[u_k]) can be performed after the transmission of bids-offers in a homomorphically encrypted format. It is assumed the TP has already calculated V^P2P[u_k]. Once the data has been transmitted to the relevant parties, the TP also hashes the homomorphically encrypted traded volume of user u_k, i.e., H(V^P2P[u_k]), and stores the result at the DLT, together with a timestamp and the ID of u_k. §.§.§ Collection, Transfer and Storage of SM Data At the end of each SC, each SM measures the real volume of energy imported from (or exported to) the grid by their user, i.e., V^Real[u_k], encrypts it with PK_sup and hashes it, i.e., H(V^Real[u_k]). It then stores the hash value on the DLT with a timestamp and the ID of u_k. The user SM also stores V^Real[u_k] as well as sends it to the users in M(u_k). §.§.§ Calculation of Individual Deviations In this step, each user u_k calculates the individual deviations (inDev) from the volume of energy they committed, for themselves and their corresponding matched users in M(u_k) (see Alg. <ref>). To calculate inDev, each user u_k subtracts their committed volume from the volume measured by their SM, for themselves (u_k) and the users m_l in M(u_k). The calculations are carried out in homomorphically encrypted format. The respective encrypted results inDev and inDev_M are sent to the referee. After the referee receives the encrypted individual deviations from users, it checks whether the computations have been done correctly. For each user and its matched user, the referee receives four encrypted results. The user u_k provides its own encrypted result, inDev[u_k], as well as that of its matched user. For the matched consumer c_i and prosumer p_j, the referee checks if the calculated values are the same. In order to achieve this, the referee subtracts these two calculated values from each other in a homomorphically encrypted format. The result of this subtraction is then sent to the supplier, who has the private key to perform homomorphic decryption. The supplier decrypts the result of the subtraction and sends it back to the referee. The referee checks whether the received value from the supplier is zero or not. If it is zero, it considers the calculations to be accurate and proceeds to store the hash of the resulting computation of user u_k (not that of the matched user) in the DLT along with the corresponding ID and timestamp of u_k, to facilitate future verification. Otherwise (if the received result is not zero), the referee intervenes to correct any erroneous calculations and identify the responsible party.
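A minimal sketch of this pairwise check, again using the phe library, is given below; the pairing of one consumer with one prosumer and the near-zero tolerance for floating-point readings are illustrative assumptions rather than protocol constants.

from phe import paillier

pk_sup, sk_sup = paillier.generate_paillier_keypair(n_length=2048)

def individual_deviation(v_real_enc, v_p2p_enc):
    # inDev = V^Real - V^P2P, computed entirely on ciphertexts.
    return v_real_enc - v_p2p_enc

# Consumer c_i and its matched prosumer p_j each compute c_i's deviation
# from the encrypted values they both received.
v_real_ci, v_p2p_ci = pk_sup.encrypt(4.0), pk_sup.encrypt(3.5)
indev_by_ci = individual_deviation(v_real_ci, v_p2p_ci)  # c_i's own result
indev_by_pj = individual_deviation(v_real_ci, v_p2p_ci)  # p_j's result for c_i

# Referee: subtract the two submissions in encrypted form and let the
# supplier, who holds SK_sup, decrypt only this difference.
difference = sk_sup.decrypt(indev_by_ci - indev_by_pj)

if abs(difference) < 1e-9:   # tolerance is an implementation assumption
    pass  # store H(indev_by_ci) on the DLT for later verification
else:
    pass  # dispute: referee recomputes from DLT-verified inputs

Because the supplier only ever decrypts the difference of the two submissions, a matching pair reveals nothing beyond the fact that both parties computed the same value.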
To do so, the referee requests V^Real and V^P2P from the users, and checks their correctness by hashing them and comparing the result with the hashes previously stored on the blockchain by the TP and the SMs. If the encrypted data received from the users is accurate, the referee recalculates inDev in encrypted format for the pair c_i and p_j whose results did not match. Next, the referee follows the same process of subtracting the calculated values and having the result decrypted by the supplier to compare the recalculated outcome with the values obtained from c_i and p_j. The referee then identifies the party that is accountable for the mismatch. §.§.§ Calculation of Total Deviations To calculate the total demand and supply deviations, the referee selects three consumers and three prosumers. Each consumer c_i sends their respective inDev[c_i] to the selected prosumers and vice versa. The selected prosumers and consumers verify the received encrypted deviations by hashing them and comparing the result with the hashes stored on the DLT. Then, the selected prosumers sum up inDev[c_i] for each c_i to calculate Dev_C^Tot (eq. <ref>) and the selected consumers do the same for each p_j (eq. <ref>). Dev_C^Tot = ∑_i=0^N_C-1 inDev_C[c_i] Dev_P^Tot = ∑_j=0^N_P-1 inDev_P[p_j] After calculating Dev_C^Tot and Dev_P^Tot, the selected prosumers and consumers send them to the referee for verification. If the results match, the referee sends them to the supplier. The supplier then decrypts the results and makes them publicly available by storing Dev_C^Tot and Dev_P^Tot on the DLT. If the results do not match, the referee corrects any erroneous calculations and identifies the responsible party. This is done by recalculating (eq. <ref>) and (eq. <ref>) in encrypted format after requesting and verifying the necessary data via the DLT. §.§.§ Calculation of Bills and Rewards In this step, we present our proposed privacy-preserving and accountable universal cost-splitting billing model that employs total deviations instead of individual deviations to establish billing conditions. The proposed billing model is presented in Alg. <ref>. The algorithm takes as input V^P2P, V^Real, π_P2P, π_RT and π_FiT and calculates the bills/revenues of consumers/prosumers. The algorithm outputs the statements Stat[u_k], Stat_M[u_k] for user u_k and its matched users in M(u_k), respectively. Stat[u_k] indicates the bill of u_k when u_k is a consumer, and it stands for the revenue of u_k if u_k is a prosumer. We have devised universal formulas such as Stat[u_k] which are applicable to both consumers and prosumers. The algorithm works in three modes based on the difference between the total deviations of consumers and prosumers, and proceeds as follows. If Dev_P^Tot = Dev_C^Tot, prosumers have generated enough electricity to meet the demand of customers, resulting in a balanced P2P market. In this case, individuals can purchase the required energy from other households and sell their excess energy to other households at π_P2P in addition to their commitments in the P2P market, rather than relying on the supplier. Energy sharing between households to compensate for deviations is advantageous for both consumers and prosumers, as they can exchange energy at a price of π_P2P, which is higher than π_FiT and lower than π_RT, compared to relying on the supplier to buy electricity at π_RT and sell electricity at π_FiT. The statements for each user u_k and for the paired users in M(u_k) are calculated between ln. 3-6 in the algorithm.
If Dev_P^Tot < Dev_C^Tot, there is a shortage of electricity in the P2P market as prosumers have not generated enough electricity to meet customer demand. If there is a shortage of electricity that cannot be compensated by other users, the only option is to purchase it from the supplier at π_RT. Users with a shortage of electricity can buy it at this price, while households with a surplus can sell it at π_RT instead of selling it to the supplier for π_FiT, which is advantageous for prosumers. In accordance with this, the statements for each user u_k and for the paired users in M(u_k) are calculated between ln. 9-11 in the algorithm. If Dev_P^Tot > Dev_C^Tot, there is excess electricity in the P2P market as prosumers have generated more electricity than is needed to meet customer demand. In this case, consumers can purchase energy from prosumers at π_P2P to compensate for their energy shortage due to deviation. The total revenue of the prosumers is distributed among them in proportion to the excess energy they provided. To calculate this, the total revenue generated by prosumers due to excess energy is first determined. Some of the excess energy is sold to consumers with a shortage of electricity at π_P2P, while the remainder is sold to the supplier at π_FiT. Therefore, the total revenue of prosumers, TotRev_P, can be calculated as TotRev_P = Dev_C^Tot·π_P2P + (Dev_P^Tot - Dev_C^Tot)·π_FiT. The total revenue TotRev_P is distributed among the prosumers in proportion to inDev_P[u_k]/Dev_P^Tot. In accordance with this, Alg. <ref> calculates the statements for each user u_k and for the paired users in M(u_k) between ln. 16-19 if u_k is a consumer. Otherwise, the statements are calculated between ln. 21-24. At the end of the algorithm, the statements are accumulated in stat^Tot in encrypted format for u_k and the users in M(u_k), assuming that stat^Tot was set to zero before the first SC. After each pair calculates their statements bilaterally, they send the results to the referee for verification. If the results do not match, the referee intervenes to correct any erroneous calculations and identify the responsible party. This is done by running Alg. <ref> for the unmatched pairs after requesting and verifying the required data for the computation via the DLT. §.§.§ Calculating the Balance of the Supplier The referee calculates the supplier's balance using only public information, and does so in a non-encrypted format. In the case where Dev_P^Tot = Dev_C^Tot, Bal_sup is set to zero (Bal_sup = 0) since there is no excess or shortage of electricity in the P2P market to be compensated by the supplier. If Dev_P^Tot > Dev_C^Tot, there is excess energy in the P2P market and the supplier purchases it at the FiT price π_FiT, resulting in a negative balance for the supplier to pay. Bal_sup is calculated as the negative product of the total excess energy (Dev_P^Tot - Dev_C^Tot) and π_FiT, i.e. Bal_sup = -(Dev_P^Tot - Dev_C^Tot)·π_FiT. If Dev_P^Tot < Dev_C^Tot, there is a shortage of energy in the P2P market that needs to be compensated by the supplier at the retail price π_RT. Bal_sup is calculated as the product of the supplied energy (Dev_C^Tot - Dev_P^Tot) and π_RT, i.e. Bal_sup = (Dev_C^Tot - Dev_P^Tot)·π_RT. At each SC, the resulting Bal_sup is accumulated to the total supplier balance, except when the SC is equal to zero, where Bal^Tot_sup is set to Bal_sup. The next step is carried out at the end of each billing period.
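The three billing modes can be summarised in the sketch below; for readability it operates on plaintext totals for a single user, whereas in PA-Bill the corresponding quantities are computed homomorphically and per matched pair, and the exact per-line formulas of Alg. 2 are not reproduced. The sign convention (a positive individual deviation meaning extra consumption for a consumer and extra generation for a prosumer) is our illustrative assumption.

def deviation_statement(in_dev, is_consumer, dev_c_tot, dev_p_tot,
                        pi_p2p, pi_rt, pi_fit):
    # Deviation-related part of a user's statement:
    # positive means an extra amount to pay, negative means revenue.
    if dev_p_tot == dev_c_tot:
        # Balanced market: deviations are settled among households at pi_p2p.
        return in_dev * pi_p2p if is_consumer else -in_dev * pi_p2p
    if dev_p_tot < dev_c_tot:
        # Shortage: the missing energy comes from the supplier, so all
        # deviations are settled at the retail price pi_rt.
        return in_dev * pi_rt if is_consumer else -in_dev * pi_rt
    # Excess generation: consumers top up at pi_p2p, prosumers share the
    # total revenue TotRev_P in proportion to their contribution.
    if is_consumer:
        return in_dev * pi_p2p
    tot_rev_p = dev_c_tot * pi_p2p + (dev_p_tot - dev_c_tot) * pi_fit
    return -(in_dev / dev_p_tot) * tot_rev_p

Because the branch taken depends only on the public totals Dev_C^Tot and Dev_P^Tot, observing which pricing mode was applied reveals nothing about any individual deviation, which is the key difference from condition checks based on per-user deviations.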
§.§.§ Transfer and Announcement of Bills, Revenues and Supplier Balance Since the final accumulated monthly statements of households cannot be hidden from the supplier (payments ultimately have to be made), the referee sends the encrypted statements consisting of bills and revenues to the supplier. The supplier then decrypts these statements using their HE private key, and hashes and stores the decrypted version on the DLT system for future verification during the payment process. The supplier's balance is also hashed and stored on the DLT. § SECURITY, PRIVACY AND ACCOUNTABILITY ANALYSIS The PA-Bill protocol addresses the security concern of avoiding SPF by distributing the majority of calculations and data storage locally. It addresses privacy concerns by utilising HE to encrypt sensitive user data such as V^Real and V^P2P, ensuring that sensitive information remains confidential during billing computations. In addition, the PA-Bill protocol employs a cost-splitting mechanism that utilises the total deviations of users rather than individual deviations to determine the billing modes. This method avoids indirect privacy leakage of individual deviations. It employs blockchain technology to create an unalterable record of the hashes of essential data necessary for billing computations. This ensures the verification and integrity of critical data, thereby enabling all parties to be held accountable for their actions during the billing process. § PERFORMANCE EVALUATION In this section, we demonstrate that PA-Bill achieves computational efficiency without compromising privacy, accountability, or the ability to accommodate user consumption variations. PA-Bill effectively addresses these critical aspects while maintaining a level of computational efficiency. We support our claims through both theoretical analysis and experiments. §.§ Theoretical Analysis The time complexity of the method is mainly determined by the input parameters of Alg. <ref> and Alg. <ref>, which include the number of users (N_U). The time required to perform the algorithm grows with the input size. Specifically, the nested double loops in Alg. <ref> and Alg. <ref> lead to a quadratic time complexity of O(n^2) in cases where N_C > N_P or N_C < N_P. The time complexity is reduced to O(n) when N_C = N_P, since the inner loop then performs a single iteration and each user has only one matched user. The time complexity of the calculations in eq. <ref> and eq. <ref> is O(n), where n depends on the inputs N_C and N_P, respectively. §.§ Experimental Results We evaluate the performance of PA-Bill by running simulations on a PC with an Intel Core i5 CPU @ 2 GHz and 16 GB of RAM to demonstrate its efficiency. We utilise the SHA3-256 algorithm for hashing and the Paillier cryptosystem for homomorphic encryption with 2048-bit keys. These operations were implemented using the Python libraries hashlib and phe, respectively. We utilised the Ethereum network to prototype the blockchain platform. To deploy and test Ethereum for our project, we used Ganache[https://www.trufflesuite.com/ganache], wrote smart contracts in Solidity[https://solidity.readthedocs.io/en/v0.8.7/], and compiled them on Remix[https://remix.ethereum.org/]. To connect our project with the Ethereum network, we utilised the Python Web3[https://web3py.readthedocs.io/en/stable/] library. As we utilised existing tools to design the blockchain platform, we did not conduct a separate performance assessment of the platform itself.
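To illustrate the hashing step that precedes every write to the ledger, here is a small sketch based on hashlib's SHA3-256 (the hash function named above); the in-memory list standing in for the DLT and the record layout are assumptions made purely for illustration and are not the smart-contract interface of the prototype.

import hashlib
import time

# Stand-in for the append-only ledger; the prototype instead uses an
# Ethereum smart contract accessed through web3.py.
mock_dlt = []

def store_evidence(user_id, encrypted_value_bytes):
    # Only the SHA3-256 digest of the (already encrypted) value is stored,
    # never the plaintext or the ciphertext itself.
    digest = hashlib.sha3_256(encrypted_value_bytes).hexdigest()
    mock_dlt.append({"user": user_id, "hash": digest, "ts": time.time()})
    return digest

def verify_evidence(encrypted_value_bytes, digest):
    # Re-hash a value received from a household and compare it with the ledger.
    return hashlib.sha3_256(encrypted_value_bytes).hexdigest() == digest

digest = store_evidence("u_42", b"ciphertext-of-V_P2P")   # names and bytes are illustrative
assert verify_evidence(b"ciphertext-of-V_P2P", digest)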
Our previous work <cit.> is deployed as the electricity trading platform, so we do not re-evaluate it in this context either. Instead, our primary focus lies in evaluating the performance of the privacy-preserving and accountable billing model. The billing model simulations were conducted on a sample of 500 users, consisting of 250 consumers and 250 prosumers. We measured PA-Bill's execution time (ET) for the computationally intensive components in two scenarios: worst-case (every household makes an incorrect bill calculation, unintentionally or maliciously, thus requiring an intervention from the referee) and best-case (all households make correct calculations, hence no referee intervention is deployed). The SC is set to one hour. Table <ref> reports the average execution time per SC for the PA-Bill components, computed over a one-month billing period comprising 720 SCs (24 SCs per day). The execution times, which are in the range of milliseconds for both the worst-case and best-case scenarios tested with a large group of 500 users, indicate that our proposed billing protocol offers a computationally efficient solution. § CONCLUSION In this work, we proposed PA-Bill, a privacy-preserving and accountable billing protocol that addresses security, privacy, and accountability issues in P2P markets at the billing and settlement stage. PA-Bill utilises a universal cost-splitting billing model, local semi-decentralised calculation, and Homomorphic Encryption for privacy protection. Blockchain technology is deployed for accountability mechanisms that resolve conflicts during the billing calculation. PA-Bill is evaluated on a community of 500 households. In our future work, we plan to investigate network constraints.
http://arxiv.org/abs/2307.04409v1
20230710081749
Violation of a Leggett-Garg inequality using ideal negative measurements in neutron interferometry
[ "Elisabeth Kreuzgruber", "Richard Wagner", "Niels Geerits", "Hartmut Lemmel", "Stephan Sponar" ]
quant-ph
[ "quant-ph" ]
[email protected] [email protected] ^1Atominstitut, TU Wien, Stadionallee 2, 1020 Vienna, Austria ^2Institut Laue-Langevin, 38000, Grenoble, France =800=800 We report on an experiment that demonstrates the violation of a Leggett–Garg inequality (LGI) with neutrons. LGIs have been proposed in order to assess how far the predictions of quantum mechanics defy `macroscopic realism'. With LGIs, correlations of measurements performed on a single system at different times are described. The measured value of K =1.120±0.007, obtained in a neutron interferometric experiment, is clearly above the limit K=1 predicted by macro-realistic theories. Violation of a Leggett–Garg inequality using ideal negative measurements in neutron interferometry Stephan Sponar^1 August 12, 2023 =================================================================================================== Introduction.—The question whether measurable quantities of a quantum object have definite values prior to the actual measurement is a fundamental issue ever since quantum theory has been introduced more than a century ago. Examples include Bell's inequality <cit.>, which sets bounds on correlations between measurement results of space-like separated components of a composite (entangled) system. A violation of Bell's inequality thus demonstrates that certain predictions of quantum mechanics cannot be reproduced by realistic theories, more precisely, by local hidden variable theories (LHVT). Another prime example is found in the Kochen-Specker theorem <cit.>, which stresses the incompatibility of quantum mechanics with a larger class of hidden-variable theories, known as noncontextual hidden-variable theories (NCHVTs). Here it is assumed that the result of a measurement of an observable is predetermined and independent of a suitable (previous or simultaneous) measurement of any other compatible (co-measurable or commuting) observable, i.e., the measurement context. While both, Bell's inequality and tests of the Kochen-Specker theorem, require composite or multiple spatially-separated systems Leggett-Garg inequalities (LGIs) <cit.> study temporal correlations of a single system, therefore they are often referred to as Bell inequalities in time. Violation of a Bell inequality is a direct witness of entanglement - a very specific feature of quantum mechanics. Contrary, in the case of LGIs the violation occurs due to the coherent superposition of system states, which is essentially the most fundamental property of quantum mechanics. In other words LGIs quantify coherence in quantum systems and can consequently be seen as a measure or test of quantumness. Leggett-Garg inequalities were proposed in 1985 <cit.> in order to assess whether sets of pairs of sequential measurements on a single quantum system can be consistent with an underlying macro-realistic theory <cit.>. Within the framework of a macro-realistic theory a single macroscopic system fulfills the following two assumptions of macrorealism measured at successive times: (A1) at any given time the system is always in only one of its macroscopically distinguishable states, and (A2) the state of the system can be determined in a non-invasive way, meaning, without disturbing the subsequent dynamics of the system. Quantum mechanics predicts the violation of the inequalities since it contradicts with both assumptions (A1) and (A2). The (quantum) system under observation has to be measured at different times. 
Correlations that can be derived from sequences of these measurements let us formulate the LGI. The result of these correlation measurements confirms either the absence of a realistic description of the system or the impossibility of measuring the system without disturbing it <cit.>. This also rules out a well-defined pre-existing value of a measurement. Recent violations of LGI have been observed in various systems, including photonic qubits <cit.>, nuclear spins in a diamond defect center <cit.>, superconducting qubits in terms of transmons <cit.> and flux qubits <cit.>, nuclear magnetic resonance <cit.>, and spin-bearing phosphorus impurities in silicon <cit.>. Proposed schemes for increasing violations of Leggett-Garg inequalities range from the action of an environment on a single qubit in terms of generic quantum channels <cit.> to open many-body systems in the presence of nonequilibrium <cit.>. In a recent paper <cit.> the authors propose to test a violation of the Leggett-Garg inequality due to the gravitational interaction in a hybrid system consisting of a harmonic oscillator and a spatially localized superposed particle <cit.>, aiming to probe the quantumness of gravity <cit.>. The violation of an LGI in an interferometric setup has been proposed theoretically in the literature for electrons in <cit.>. The requirement of non-invasive measurements from (A2) is realized in most experiments by utilizing the concept of weak measurements, or by introducing an ancilla system, as implemented in <cit.>. Note that even a weak measurement in practice can never be completely non-invasive (due to a non-vanishing measurement strength) and the preparation of the ancilla system will also always be imperfect. However, the experimental procedure from <cit.> realizes ideal negative measurements in an interferometer experiment in order to fulfill the requirement of non-invasive measurements from (A2) without the need for an ancilla. In this Letter, we present a neutron interferometric experiment demonstrating a violation of the LGI. In our measurement scheme the single system is represented by the neutron's path in an interferometer. A respective observable is defined and measured non-invasively according to the LGI protocol. Leggett–Garg inequality.—For dichotomous variables Q_i, accounting for two macroscopically distinguishable states, having outcomes q_i=±1, the correlation function for measurements at times t_i, t_j is given by C_ij=⟨ Q_i Q_j⟩=∑_q_i, q_j=± q_i q_j P(q_i(t_i),q_j(t_j)), where P(q_i(t_i),q_j(t_j)) denotes the joint probability of obtaining the measurement results q_i at time t_i and q_j at time t_j. Considering Eq.(<ref>) for three experimental sets with i,j∈{1,2,3} yields the LGI K ≡ C_21 + C_32-C_31, where K denotes the Leggett-Garg correlator, with limits -3≤ K ≤ 1. Since the three correlators are derived from probabilities with |C_ij|≤ 1, the lower limit cannot be violated. However, quantum mechanics allows for a violation of the upper bound. In a two-level system, the maximum obtainable violation is K=1.5 <cit.>. The basic idea behind the experimental procedure, as proposed by Emary et al. in <cit.>, is to map the temporal structure (or measurement time t_i) of the LGI onto real-space coordinates, more precisely onto three distinct regions of the interferometer, indicated by the index α∈{1,2,3}, cf. Fig. <ref>. Within each region the two paths of the interferometer constitute a qubit.
The measurement of the qubit's state, denoted as q_i=±1, therefore results in a “which-way” measurement <cit.> in the particular region of interest. While a click of a detector in e.g. the + arm of region 2 (q_2=+1) is a strongly invasive measurement, on the other hand the absence of a detector response implies q_2=-1 and does not disturb the system at all. It accounts for the required non-invasive measurement (A2) in terms of an ideal negative measurement. In our neutron interferometric realization of <cit.> neutrons enter the IFM via the + port of region 1. Hence, it is not necessary to measure in region 1 and the noninvasive measurability is granted. The first plate of the IFM consists of a tunable beamsplitter characterized by parameter ϑ_A, which is schematically illustrated in Fig. <ref>. The theoretical maximum of K=1.5 is obtained for ϑ_A=ϑ_B=π/3 and phase shift χ=0. However, in our setup with fixed ϑ_B=π/2 (usual 50:50 beamsplitter), the maximal possible violation is K=√(2) (for ϑ_A=π/4). We define P_α±,β±(n_α,n_β) as the joint probability that two detectors placed at position α± and β± respectively detect (n=1) or don't detect a neutron (n=0), where α and β specify the region and ± the path. Then the correlator, as defined in Eq.(<ref>), between regions α and β is given by C_αβ=∑_q_α,q_β=±q_α q_β P_α q_α,β q_β(1,1). Hence the correlation function for regions 1 and 3, denoted as C_31, can simply be expressed as C_31=P_3+,1+(1,1)-P_3-,1+(1,1), since the neutrons always enter from 1+. Therefore, the correlation function C_31 can also be expressed in terms of marginal probabilities as C_31=P_3+(1)-P_3-(1). Although not particularly necessary here, it is instructive to express C_31 in terms of ideal negative measurements as C_31= ∑_q_1,q_3=±q_1 q_3 P_3 q_α(1)(1-P_1q_β(0)) =-∑_q_1,q_3=±q_1 q_3 P_1q_2,3q_3(1,0), since P_1q_1(0)=1-P_1q_1(1). A similar expression gives the correlator C_21=P_1+,2+(1)-P_1+,2-(1) which is measured with detectors directly placed in region 2, shown in Fig. <ref> (a). For C_32 all four terms of the sum from Eq.(<ref>) contribute, taking both paths of section 2 into account. C_32=∑_q_2,q_3=±q_2 q_3 P_3q_3,2q_2(1,1) Using again P_2q_2(0)=1-P_2q_2(1) we write the sum as C_32=-∑_q_2,q_3=±q_2 q_3 P_3q_3,2q_2(1,0) in order to account for the non-invasive or ideal negative measurement in section 2. The two probabilities P_3±,2-(1,0) are determined by counting the neutrons in path 3+ and 3- respectively under the condition that they have not been counted in path 2-. The latter is ensured by placing a beam blocker in path 2-, cf. Fig. <ref>(b). The other two probabilities are measured similarly as shown in Fig. <ref>(c). The correlators according to <cit.> for the regions in our setup are calculated as follows C_21= cosϑ_A C_32= cosϑ_B C_31= cosϑ_A cosϑ_B - cosχsinϑ_A sinϑ_B K= cosϑ_A+cosϑ_B-cosϑ_A cosϑ_B + cosχsinϑ_A sinϑ_B, which in our setup, with fixed ϑ_B=π/2, becomes K=cosϑ_A + cosχsinϑ_A. Figure <ref> shows the regions in the parameter space (ϑ_A,χ) of our experimental LGI test (with fixed value ϑ_B=π/2), where it is in theory possible to violate the LGI with a value K=√(2). ϑ_A represents the mixing angle of the first interferometer plate, and χ the phase shifter angle. The resulting K values are shown in green for areas where no violation is possible, and in orange for a possible violation of the LGI. The dashed red line indicates our measurement result in an ideal interferometer.
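The dependence of K on the interferometer settings can be checked numerically; the short script below, added here purely for illustration, evaluates the quantum-mechanical prediction quoted above and reproduces the two special cases K=1.5 at ϑ_A=ϑ_B=π/3, χ=0 and K=√(2) at ϑ_A=π/4 for the fixed ϑ_B=π/2 of the present setup.

import numpy as np

def K(theta_a, theta_b, chi):
    # Quantum prediction for the Leggett-Garg correlator K = C_21 + C_32 - C_31
    # with the correlators given in the text.
    c21 = np.cos(theta_a)
    c32 = np.cos(theta_b)
    c31 = np.cos(theta_a) * np.cos(theta_b) - np.cos(chi) * np.sin(theta_a) * np.sin(theta_b)
    return c21 + c32 - c31

print(K(np.pi / 3, np.pi / 3, 0.0))   # 1.5, the two-level maximum
print(K(np.pi / 4, np.pi / 2, 0.0))   # sqrt(2) ~ 1.414 for the 50:50 second plate

# Settings that violate the macrorealistic bound K <= 1 when theta_b = pi/2.
theta_a, chi = np.meshgrid(np.linspace(0, np.pi, 361), np.linspace(-np.pi, np.pi, 721))
violation = K(theta_a, np.pi / 2, chi) > 1.0
print(violation.mean())               # fraction of the scanned parameter plane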
Neutron interferometer setup.—Neutron interferometry <cit.> provides a powerful tool for investigation of fundamental quantum mechanical phenomena. Entanglement between different degrees of freedom (DOF), e.g., the neutron’s spin, path, and energy DOF has been confirmed, and the contextual nature of quantum mechanics has been demonstrated successfully <cit.>. In more recent experiments the concept of weak measurements and weak values has been utilized for direct state reconstruction <cit.>, demonstration of the canonical commutator relation <cit.> and studies of which way information <cit.>. The experiment was carried out at the neutron interferometer instrument S18 at the high-flux reactor of the Institute Laue-Langevin (ILL) in Grenoble, France (the experimental data can be found on the ILL data server under <cit.>. A monochromatic unpolarized neutron beam with mean wavelength λ=1.91Å (δλ/λ∼0.02) and 3 × 3 mm^2 beam cross section was used to illuminate the interferometer. In order to observe a violation of an LGI in an interferometric experiment, it is necessary to implement a non-50:50 beam splitter at the first plate of the interferometer. This is achieved by placing a partial absorber behind the first interferometer plate in one of the neutron paths. The absorber is an Indium slab, about 3 thick, placed in path I, resulting in an intensity ratio between paths I and II of about 10:90. The interferometer itself is a symmetric three-plate silicon perfect crystal (triple Laue type), with a plate thickness of 3 and a length of 140. A schematic illustration of the interferometric setup is given in Fig. <ref>. To obtain interference fringes, a 5 Aluminium phase shifter was used. Additional beam blockers for the detection of single path intensities were made of Cadmium. Both the `O' and `H' detectors outside the interferometer and the additional detector for C_21 measurements were ^3He proportional counting tubes. Determination of correlators C_31 and C_21 is straightforward. In both cases it is not necessary to measure non-invasively, since no subsequent measurement on the same state is performed. For C_31, the measurement is that of a standard interferogram Fig. <ref>, with measurement time 180 seconds per phase shifter position. The correlator C_31 is calculated via C_31=N_3+1+(χ)-N_3-1+(χ)/N_3+1+(χ)+N_3-1+(χ), where N_3+1+(χ) denotes the counts in the H detector and N_3-1+(χ) the counts in the O detector. Due to the cosine behaviour of the recorded interferogram, this correlator is dependent on the position χ of the phase shifter. For the largest possible violation, the maximum counts in O and minimum in H are used, which corresponds to the position χ=2 n π (where n∈ℕ_0) in Fig. <ref>. Similarly, the correlator C_21 is calculated as C_21=N_2+1+-N_2-1+/N_2+1++N_2-1+ and is performed as a transversal scan with a pencil-size He-3 detector mounted on a translation stage in region 2 of the interferometer, with measurement time 300 seconds per detector position. Moving first through path I and then through path II, the resulting neutron counts are shown in Fig. <ref>, where the separation between both paths is also clearly visible. The N_2i1+ are the neutron counts in the peak of the respective Gaussian fit to the intensity profiles. For correlator C_32, however, it is crucial to measure non-invasively. This is done by measuring the absence of a neutron in a given path due to the Cd blocker, meaning that the neutron has to take the path without the Cd blocker. 
This is represented by the minus sign in Eq. (<ref>). Four measurements are performed, with each of the paths blocked in turn and the resulting intensity in detectors O and H recorded for a measurement time of 600 seconds. These results are shown in Fig. <ref>. C_32 becomes C_32=N_3+2-+N_3-2+-N_3+2+-N_3-2-/N_3+2-+N_3-2++N_3+2++N_3-2-, with N_3+2- and N_3+2+ the neutron counts in the H detector with blocked path II and path I, respectively, and likewise for the O detector in N_3-2±. Results.—In order to demonstrate the experimental violation of the Leggett–Garg inequality, we calculate the correlator K, Eq. (<ref>). The resulting curve is shown in Fig. <ref>, with the maximum at a phase shift of χ=0. With the Indium absorber in path I of the interferometer, a violation of the limit K=1 is clearly visible (Fig. <ref>(a)). Our results show a significant violation of the LGI by 18 standard deviations σ (denoted as n_σ=18) at the maximum, K =1.120±0.007. The violation is visible over a wide range of phase shifter values χ. Numeric values of the individual correlators C_ij and the final value of K in case of the maximal violation of the LGI are presented in Tab. <ref>. For comparison, Fig. <ref>(b) shows the same measurement procedure for a symmetric beam splitter (ϑ_A=π/2), i.e. without absorber, where no violation is possible, resulting in K=0.540±0.023. Concluding remarks and discussion.—Our measurement results demonstrate a violation of an LGI by n_σ=18.0, while the absorberless measurements show no violation. Hence we conclude that neutrons in an interferometer must be understood quantum mechanically. An even higher violation can be achieved when the signs in region 3 are switched, and detector O becomes 3+, detector H 3-. The correlators C_31 and C_32 have to be recalculated accordingly, resulting in K=1.162±0.006 with n_σ=28. This `additional' violation is due to the asymmetric nature of the perfect crystal interferometer. Since successive reflections on the crystal lamellas enhance the reflectivity <cit.> the H detector always receives some phase-independent intensity offset. The detection loophole is closed due to the high efficiency of our neutron detectors, close to unity. The fair sampling assumption is needed, especially for the correlator C_21, which is the case for a wide range of experiments of this kind, since simultaneous detection of everything is impossible. Finally, we want to emphasize that the interferometric scheme applied in the present work is not limited neutrons, but is in fact completely general and can be used for any quantum particle with nonzero or even zero mass. This work was supported by the Austrian science fund (FWF) Projects No. P 30677 and No. P 34239. 30 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Bell(1964)]Bell64 author author J. S. Bell, title title On the Einstein-Podolsky-Rosen paradox, @noop journal journal Physics (Long Island City, N.Y.) volume 1, pages 195 (year 1964)NoStop [Bell(1966)]Bell66 author author J. S. Bell, title title On the problem of hidden variables in quantum mechanics, https://doi.org/10.1103/RevModPhys.38.447 journal journal Rev. Mod. Phys. volume 38, pages 447 (year 1966)NoStop [Kochen and Specker(1967)]Kochen67 author author S. Kochen and author E. P. 
Specker, title title The problem of hidden variables in quantum mechanics, @noop journal journal J. Math. Mech. volume 17, pages 59 (year 1967)NoStop [Leggett and Garg(1985)]leggett_quantum_1985 author author A. J. Leggett and author A. Garg, title title Quantum mechanics versus macroscopic realism: Is the flux there when nobody looks?, https://doi.org/10.1103/PhysRevLett.54.857 journal journal Phys. Rev. Lett. volume 54, pages 857 (year 1985)NoStop [Emary et al.(2014)Emary, Lambert, and Nori]emary_leggettgarg_2014 author author C. Emary, author N. Lambert, and author F. Nori, title title Leggett–Garg inequalities, https://doi.org/10.1088/0034-4885/77/1/016001 journal journal Rep. Prog. Phys. volume 77, pages 016001 (year 2014)NoStop [Ruskov et al.(2006)Ruskov, Korotkov, and Mizel]Ruskov06 author author R. Ruskov, author A. N. Korotkov, and author A. Mizel, title title Signatures of quantum behavior in single-qubit weak measurements, https://doi.org/10.1103/PhysRevLett.96.200404 journal journal Phys. Rev. Lett. volume 96, pages 200404 (year 2006)NoStop [Jordan et al.(2006)Jordan, Korotkov, and Büttiker]Jordan06 author author A. N. Jordan, author A. N. Korotkov, and author M. Büttiker, title title Leggett-garg inequality with a kicked quantum pump, https://doi.org/10.1103/PhysRevLett.97.026805 journal journal Phys. Rev. Lett. volume 97, pages 026805 (year 2006)NoStop [Dressel et al.(2011)Dressel, Broadbent, Howell, and Jordan]Dressel11 author author J. Dressel, author C. J. Broadbent, author J. C. Howell, and author A. N. Jordan, title title Experimental violation of two-party Leggett-Garg inequalities with semiweak measurements, https://doi.org/10.1103/PhysRevLett.106.040402 journal journal Phys. Rev. Lett. volume 106, pages 040402 (year 2011)NoStop [Goggin et al.(2011)Goggin, Almeida, Barbieri, Lanyon, O’Brien, White, and Pryde]Goggin11 author author M. E. Goggin, author M. P. Almeida, author M. Barbieri, author B. P. Lanyon, author J. L. O’Brien, author A. G. White, and author G. J. Pryde, title title Violation of the Leggett–Garg inequality with weak measurements of photons, https://doi.org/10.1073/pnas.1005774108 journal journal Proc. Natl. Acad. Sci. USA volume 108, pages 1256 (year 2011)NoStop [Waldherr et al.(2011)Waldherr, Neumann, Huelga, Jelezko, and Wrachtrup]Waldherr11 author author G. Waldherr, author P. Neumann, author S. F. Huelga, author F. Jelezko, and author J. Wrachtrup, title title Violation of a temporal Bell inequality for single spins in a diamond defect center, https://doi.org/10.1103/PhysRevLett.107.090401 journal journal Phys. Rev. Lett. volume 107, pages 090401 (year 2011)NoStop [Palacios-Laloy et al.(2010)Palacios-Laloy, Mallet, Nguyen, Bertet, Vion, Esteve, and Korotkov]Palacios10 author author A. Palacios-Laloy, author F. Mallet, author F. Nguyen, author P. Bertet, author D. Vion, author D. Esteve, and author A. N. Korotkov, title title Experimental violation of a bell's inequality in time with weak measurement, https://doi.org/10.1038/nphys1641 journal journal Nat. Phys. volume 6, pages 442 (year 2010)NoStop [Knee et al.(2016)Knee, Kakuyanagi, Yeh, Matsuzaki, Toida, Yamaguchi, Saito, Leggett, and Munro]Knee16 author author G. C. Knee, author K. Kakuyanagi, author M.-C. Yeh, author Y. Matsuzaki, author H. Toida, author H. Yamaguchi, author S. Saito, author A. J. Leggett, and author W. J. Munro, title title A strict experimental test of macroscopic realism in a superconducting flux qubit, https://doi.org/10.1038/ncomms13253 journal journal Nat. Commun. 
volume 7, pages 13253 (year 2016)NoStop [Athalye et al.(2011)Athalye, Roy, and Mahesh]Athalye11 author author V. Athalye, author S. S. Roy, and author T. S. Mahesh, title title Investigation of the Leggett-Garg inequality for precessing nuclear spins, https://doi.org/10.1103/PhysRevLett.107.130402 journal journal Phys. Rev. Lett. volume 107, pages 130402 (year 2011)NoStop [Souza et al.(2011)Souza, Oliveira, and Sarthour]Souza11 author author A. M. Souza, author I. S. Oliveira, and author R. S. Sarthour, title title A scattering quantum circuit for measuring Bell's time inequality: a nuclear magnetic resonance demonstration using maximally mixed states, https://doi.org/10.1088/1367-2630/13/5/053023 journal journal New J. Phys. volume 13, pages 053023 (year 2011)NoStop [Knee et al.(2012)Knee, Simmons, Gauger, Morton, Riemann, Abrosimov, Becker, Pohl, Itoh, Thewalt, Briggs, and Benjamin]Knee2012 author author G. C. Knee, author S. Simmons, author E. M. Gauger, author J. J. Morton, author H. Riemann, author N. V. Abrosimov, author P. Becker, author H.-J. Pohl, author K. M. Itoh, author M. L. Thewalt, author G. A. D. Briggs, and author S. C. Benjamin, title title Violation of a Leggett–Garg inequality with ideal non-invasive measurements, https://doi.org/10.1038/ncomms1614 journal journal Nat. Commun. volume 3, pages 606 (year 2012)NoStop [Emary(2013)]Emary13 author author C. Emary, title title Decoherence and maximal violations of the Leggett-Garg inequality, https://doi.org/10.1103/PhysRevA.87.032106 journal journal Phys. Rev. A volume 87, pages 032106 (year 2013)NoStop [Mendoza-Arenas et al.(2019)Mendoza-Arenas, Gómez-Ruiz, Rodríguez, and Quiroga]Arenas19 author author J. J. Mendoza-Arenas, author F. J. Gómez-Ruiz, author F. J. Rodríguez, and author L. Quiroga, title title Enhancing violations of Leggett-Garg inequalities in nonequilibrium correlated many-body systems by interactions and decoherence, https://doi.org/10.1038/s41598-019-54121-1 journal journal Sci. Rep. volume 9, pages 17772 (year 2019)NoStop [Matsumura et al.(2022)Matsumura, Nambu, and Yamamoto]Matsumura22 author author A. Matsumura, author Y. Nambu, and author K. Yamamoto, title title Leggett-Garg inequalities for testing quantumness of gravity, https://doi.org/10.1103/PhysRevA.106.012214 journal journal Phys. Rev. A volume 106, pages 012214 (year 2022)NoStop [Bose et al.(2018)Bose, Home, and Mal]Bose18 author author S. Bose, author D. Home, and author S. Mal, title title Nonclassicality of the harmonic-oscillator coherent state persisting up to the macroscopic domain, https://doi.org/10.1103/PhysRevLett.120.210402 journal journal Phys. Rev. Lett. volume 120, pages 210402 (year 2018)NoStop [Bose et al.(2017)Bose, Mazumdar, Morley, Ulbricht, Toro šš, Paternostro, Geraci, Barker, Kim, and Milburn]Bose17 author author S. Bose, author A. Mazumdar, author G. W. Morley, author H. Ulbricht, author M. Toro šš, author M. Paternostro, author A. A. Geraci, author P. F. Barker, author M. S. Kim, and author G. Milburn, title title Spin entanglement witness for quantum gravity, https://doi.org/10.1103/PhysRevLett.119.240401 journal journal Phys. Rev. Lett. volume 119, pages 240401 (year 2017)NoStop [Marletto and Vedral(2017)]Marletto17 author author C. Marletto and author V. Vedral, title title Gravitationally induced entanglement between two massive particles is sufficient evidence of quantum effects in gravity, https://doi.org/10.1103/PhysRevLett.119.240402 journal journal Phys. Rev. Lett. 
volume 119, pages 240402 (year 2017)NoStop [Emary et al.(2012)Emary, Lambert, and Nori]emary_leggett-garg_2012 author author C. Emary, author N. Lambert, and author F. Nori, title title Leggett-Garg inequality in electron interferometers, journal volume 86, https://doi.org/10.1103/PhysRevB.86.235447 Phys. Rev. B volume 86 (year 2012) NoStop [Englert(1996)]Englert96 author author B.-G. Englert, title title Fringe visibility and which-way information: An inequality, https://doi.org/10.1103/PhysRevLett.77.2154 journal journal Phys. Rev. Lett. volume 77, pages 2154 (year 1996)NoStop [Rauch and Werner(2000)]RauchBook author author H. Rauch and author S. A. Werner, @noop title Neutron Interferometry (publisher Clarendon Press, Oxford, year 2000)NoStop [Klepp et al.(2014)Klepp, Sponar, and Hasegawa]klepp2014fundamental author author J. Klepp, author S. Sponar, and author Y. Hasegawa, title title Fundamental phenomena of quantum mechanics explored with neutron interferometers, https://doi.org/10.1093/ptep/ptu085 journal journal Prog. Theor. Exp. Phys volume 2014, (year 2014)NoStop [Sponar et al.(2021)Sponar, Sedmik, Pitschmann, Abele, and Hasegawa]Sponar21 author author S. Sponar, author R. I. P. Sedmik, author M. Pitschmann, author H. Abele, and author Y. Hasegawa, title title Tests of fundamental quantum mechanics and dark interactions with low-energy neutrons, https://doi.org/10.1038/s42254-021-00298-2 journal journal Nat. Rev. Phys volume 3, pages 309 (year 2021)NoStop [Denkmayr et al.(2017)Denkmayr, Geppert, Lemmel, Waegell, Dressel, Hasegawa, and Sponar]Denkmayr17 author author T. Denkmayr, author H. Geppert, author H. Lemmel, author M. Waegell, author J. Dressel, author Y. Hasegawa, and author S. Sponar, title title Experimental demonstration of direct path state characterization by strongly measuring weak values in a matter-wave interferometer, https://doi.org/10.1103/PhysRevLett.118.010402 journal journal Phys. Rev. Lett. volume 118, pages 010402 (year 2017)NoStop [Wagner et al.(2021)Wagner, Kersten, Danner, Lemmel, Pan, and Sponar]Wagner21 author author R. Wagner, author W. Kersten, author A. Danner, author H. Lemmel, author A. K. Pan, and author S. Sponar, title title Direct experimental test of commutation relation via imaginary weak value, https://doi.org/10.1103/PhysRevResearch.3.023243 journal journal Phys. Rev. Research volume 3, pages 023243 (year 2021)NoStop [Geppert-Kleinrath et al.(2018)Geppert-Kleinrath, Denkmayr, Sponar, Lemmel, Jenke, and Hasegawa]Geppert18 author author H. Geppert-Kleinrath, author T. Denkmayr, author S. Sponar, author H. Lemmel, author T. Jenke, and author Y. Hasegawa, title title Multifold paths of neutrons in the three-beam interferometer detected by a tiny energy kick, https://doi.org/10.1103/PhysRevA.97.052111 journal journal Phys. Rev. A volume 97, pages 052111 (year 2018)NoStop [Lemmel et al.(2022)Lemmel, Geerits, Danner, Hofmann, and Sponar]Lemmel2022 author author H. Lemmel, author N. Geerits, author A. Danner, author H. F. Hofmann, and author S. Sponar, title title Quantifying the presence of a neutron in the paths of an interferometer, https://doi.org/10.1103/PhysRevResearch.4.023075 journal journal Phys. Rev. 
Research volume 4, pages 023075 (year 2022)NoStop [ILL et al.(2021)Sponar, Kreuzgruber, and Lemmel]S18data author author Stephan Sponar, author Elisabeth Kreuzgruber, and author Hartmut Lemmel, @noop title Leggett-Garg Inequality, (year 2019), note https://doi.ill.fr/10.5291/ILL-DATA.CRG-2643 https://doi.ill.fr/10.5291/ILL-DATA.CRG-2643NoStop [Petrascheck and Rauch(1984)]petrascheck1984 author author D. Petrascheck and author H. Rauch, title title Multiple Laue rocking curves, @noop https://doi.org/10.1107/S0108767384000878journal journal Acta Crystallogr. A volume 40, pages 445 (year 1984)NoStop
http://arxiv.org/abs/2307.04550v2
20230710132923
Gradient Surgery for One-shot Unlearning on Generative Model
[ "Seohui Bae", "Seoyoon Kim", "Hyemin Jung", "Woohyung Lim" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Gradient Surgery for One-shot Unlearning on Generative Model. Seohui Bae, Seoyoon Kim, Hyemin Jung, Woohyung Lim (LG AI Research, Seoul, South Korea). Keywords: deep unlearning, generative model, privacy. Recent regulation on the right to be forgotten has generated great interest in unlearning pre-trained machine learning models. While approximating the straightforward yet expensive approach of retrain-from-scratch, recent machine unlearning methods unlearn a sample by updating weights to remove its influence on the weight parameters. In this paper, we introduce a simple yet effective approach to remove the influence of data on a deep generative model. Inspired by works in multi-task learning, we propose to manipulate gradients to regularize the interplay of influence among samples by projecting gradients onto the normal plane of the gradients to be retained. Our work is agnostic to the statistics of the removal samples, outperforming existing baselines while providing theoretical analysis for the first time in unlearning a generative model. § INTRODUCTION Suppose a user wants to get rid of his/her face image anywhere in your facial image generation application, including the database and the generative model on which it is trained. Is the expensive retrain-from-scratch the only solution for this kind of request? As the use of personal data in training machine learning models for online services has increased, meeting individual demands for privacy and the rapid change in legislation such as the General Data Protection Regulation (GDPR) is inevitable for ML service providers nowadays. Such a `Right-To-Be-Forgotten (RTBF)' request might arrive once or in series, scale from a single feature to a number of tasks, and query a single instance or multiple ones. A straightforward solution for unlearning a single data point might be to retrain a generative model from scratch without the data of interest. This approach, however, is intractable in practice considering the sheer size and complexity of the latest generative models <cit.> and the continual requests for removal. Unlearning, therefore, aims to approximate this straightforward yet expensive retrain-from-scratch solution in a time- and computation-efficient manner. First-order data-influence-based approximate unlearning is currently considered the state-of-the-art approach to unlearning machine learning models in general. Grounded in the notion of data influence <cit.>, a simple one-step Newton update certifies a sufficiently small bound with respect to retrain-from-scratch <cit.>. Nonetheless, those relaxations are infeasible for non-convex deep neural networks (such as generative models), where the gap is not certifiably bounded and computing the inverse of the Hessian is intractable. Several recent works have also affirmed that these relaxed alternatives perform poorly on deep neural networks <cit.>, and their behaviour on generative models has not been explored yet. Contribution In this work, we propose a novel one-shot unlearning method for unlearning samples from a pre-trained deep generative model. Relaxing the definition of the influence function on parameters in machine unlearning <cit.>, we focus on the influence of a single data point on the test loss of the others and propose a simple and cost-effective method to minimize this inter-dependent influence to approximate retrain-from-scratch.
We summarize our contributions as follows: * We propose to annul the influence of samples on generations with simple gradient manipulation. * Agnostic to removal statistics and thus applied to any removals such as a single data, a class, some data feature, etc. * Grounded by a theoretical analysis bridging standard machine unlearning to generative model. § GRADIENT SURGERY FOR ONE-SHOT DATA REMOVALS ON GENERATIVE MODEL Notations Let D={x_i}_i=1^N⊆𝒳 be the training data where x_i ∈𝒳 is input. Let D_f ⊆ D be a subset of training data that is to be forgotten (i.e. forget set) and D_r = D ∖ D_f be remaining training data of which information we want to retain. Recall that the goal of unlearning is to approximate the deep generative model retrained from scratch with only D_r, which we denote as f_θ^* parameterized by θ^*. Then, our goal is to unlearn D_f ⊆ D from a converged pre-trained generator f_θ̂ by updating the parameter θ̂→θ^-, where θ^- represents the updated parameters obtained after unlearning. Proposed method Given a generative model that models the distribution of training data p(D), a successful unlearned model that unlearns D_f would be what approximates p(D_r), the distribution of D_r, as if it had never seen D_f. The only case where the unlearned model generates samples similar to x∈ D_f is when p(D_f) and p(D_r) happen to be very close from the beginning. Under this goal, a straight-forward objective given the pre-trained model approximating p(D) is to make the output of generation to deviate from p(D_f), which could be simply formulated as the following: max_θ𝔼_(x,y)∼ D_fℒ(θ, x, y) where ℒ denotes training loss (e.g. reconstruction loss). Meanwhile, assume we could define the influence of a single data on the weight parameter and generation result. Then, unlearning this data would be by simply updating the weight parameter in a direction of removing the data influence. Toward this, we start with defining the data influence on weight parameters and approximates to feasible form as introduced in <cit.>: Given upweighting z by some small ϵ and the new parameters θ̂_ϵ,z*argmin_θ∈Θ1/n∑_i=1^nℒ(z_i, θ) + ϵℒ(z,θ), the influence of upweighting z on the parameter θ̂ is given by I_up,param(z) dθ̂_ϵ,z/dϵ|_ϵ=0 -H_θ̂^-1∇_θ L(z,θ̂) where H_θ̂ = 1/n∑_i=1^n∇_θ^2 L(z_i, θ̂) is the Hessian and is positive definite (PD) by assumption. By forming a quadratic approximation to the empirical risk around θ̂, a data influence on the weight parameter is formulated as a single Newtons step (See details in Appendix of <cit.>), which is consistent with the objective we have mentioned in Equation <ref>. Although numerous works have verified that this data influence-based approach works well in shallow, discriminative models <cit.>, we cannot apply this directly to our generative model due to intractable computation and lack of guarantees on bounds. To address this problem, we re-purpose our objective to minimize the data influence on generation. Grounded by recent works <cit.>, we find that we could enjoy this on generative model simply by diminishing the gradient conflict as follows: Reducing the influence of samples z∈ D_f in training data with regard to test loss is formulated as: I^'_up,loss(D_f,z') → 0, which is equivalent to ∇_θℒ(z',θ̂)^T ∑_z ∈ D_f∇_θℒ(z,θ̂) → 0 where z'∈ D_r in our scenario. Informally, we could achieve this by alleviating the conflict between two gradients ∇_θℒ(z',θ̂) and ∇_θℒ(z,θ̂), resulting in diminishing the inner product of two gradients. 
This reminds us of a classic gradient manipulation technique for conflicting gradients in the multi-task learning scenario <cit.>. Specifically, we project the gradient of a forget sample x_f ∈ D_f onto the normal plane of the gradient of the retain samples x_r ∈ D_r to meet ℐ_up,loss(x_f, x_r)=0. This orthogonal projection replaces the original gradient 𝐠_f=∇ℒ_f of the forget sample with an update direction on the weight parameters that sufficiently unlearns the sample x_f ∈ D_f: g_f ← g_f - (g_f · g_r/‖g_r‖^2) g_r. Then, the unlearned model θ^- is obtained after the following gradient update: θ^- = θ̂ - η g_f. § EXPERIMENTS We verify our idea under numerous data removal requests. Note that measuring and evaluating a generative model to unlearn a single data point is non-trivial. Even comparing a pre-trained generative model trained with a particular data point against one trained without it, simply by looking at the output of training (e.g. generated images, weights), is intractable in the case of a deep generative model to the best of our knowledge <cit.>. To make the problem verifiable, in this work we experiment with unlearning a group of samples sharing similar statistics in the training data, either belonging to a particular class or sharing a distinctive semantic feature. In this case, one can evaluate the output of the generation by measuring the number of samples including that class or semantic feature; a successfully unlearned model would generate nearly no samples having these features. Although we are not able to cover unlearning a single data point in this work, note that, in essence, our method could seamlessly approximate the generative model trained without a single data point, and we look forward to exploring and adjusting a feasible evaluation for this scenario in the near future. §.§ Experimental Setup Scenarios We unlearn either a whole class or some notable feature from a group of samples. In the experiment, we use a subset of MNIST <cit.> with samples of classes 1, 3, 8 and 64x64 CelebA <cit.> to train and unlearn a vanilla VAE <cit.>. Evaluation We evaluate our method under the following three criteria: privacy guarantee, utility guarantee, and cost. The privacy guarantee includes the feature ratio (fratio), the ratio of generated images including the target feature (see details in Appendix <ref>). The utility guarantee includes the Frechet Inception Distance (FID), a widely used measure of generation quality. The cost includes the total execution time (Time), which should be shorter than that of retrain-from-scratch. A successfully unlearned model would show a near-zero feature ratio, the same IS and FID scores as the initial pre-trained model (BEFORE), and the lowest possible execution time. Given the legal impact and the goal of unlearning, note that guaranteeing privacy is prioritized the highest.
Note that the feature ratio of gradient ascent in the CelebA experiment (feature ratio-CelebA-Grad.Ascnt.) was omitted because the generated samples turned out to be noisy images, so the evaluation by the pre-trained classifier cannot be trusted. Also note that although the baselines perform better in terms of utility and cost, they do not achieve a near-best score on the privacy guarantee. Qualitative Result We further validate our method by comparing the generated images before and after the proposed unlearning algorithm. As shown in Figure <ref>, no class 1 samples are observed after unlearning class 1, meaning that our method successfully meets the removal request; this aligns with the quantitative result in Table <ref>, where the ratio of samples of class 1 is reduced from 34.3% to ≤ 15%. The quality of the generated images remains fair: the digits 3 and 8 are clearly distinguishable by eye, although some examples show minor damaged features, which is consistent with the decrease in IS and the increase in FID score. Note that the ultimate goal of unlearning is to meet the privacy guarantee while preserving the utility of pre-training; closing the remaining utility gap is left for future work. § CONCLUSION In this work, we introduce a novel, theoretically grounded unlearning method for generative models. Inspired by the influence of individual samples on the others, we propose a simple and effective gradient surgery that unlearns a given set of samples from a pre-trained generative model and outperforms the existing baselines. Although we do not experiment with unlearning a single data point, due to the lack of an established evaluation of the uniqueness of a particular sample, we leave this as future work and emphasize that our method can also be applied to this scenario. Furthermore, it would be interesting to verify our ideas on various privacy-sensitive datasets. Nonetheless, our work demonstrates the possibility of unlearning a pre-trained generative model, laying the groundwork for privacy handling in generative AI. § EXPERIMENTAL DETAILS §.§ Setup Architecture In this experiment, we use a vanilla VAE <cit.> with encoders built from either a stack of linear layers (for the MNIST experiment) or convolutional layers (for the CelebA experiment). Although we verify our results on a VAE, note that our method can be applied to any variational-inference-based generative model such as <cit.>. Baseline We compare our experimental results with the following two baselines. The first is the recently published, and to our knowledge only, unlearning work on generative models <cit.> (FU), which unlearns by feeding a surrogate model with projected latent vectors. We reproduce FU and follow the hyperparameter details (e.g. 200 unlearning epochs for MNIST) of the original paper. The other is a straightforward baseline (Grad.Ascnt.) which updates the parameters in the direction that maximizes the reconstruction loss on the forget set, which is equivalent to optimizing Objective <ref> without gradient surgery. Note that we keep the same step size when unlearning with these three different methods (including ours) for a fair comparison. Training details We use the Adam optimizer with learning rate 5e-04 for the MNIST experiment and 1e-05 for the CelebA experiment. We update the parameters only once (1 epoch) per removal, hence the title 'one-shot unlearning'. All experiments are repeated three times.
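For concreteness, the one-shot update evaluated here — project the forget-set gradient onto the normal plane of the retain-set gradient and take a single parameter step — can be sketched as follows in PyTorch. This is a minimal sketch rather than the exact implementation used in the experiments: model, loss_fn and the two mini-batches are placeholders for the pre-trained VAE, its training loss, and samples from D_f and D_r, and eta corresponds to the learning rates listed above.

import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def one_shot_unlearn(model, loss_fn, forget_batch, retain_batch, eta):
    # `model`, `loss_fn`, `forget_batch` and `retain_batch` are placeholders
    # (illustrative names), not part of the original text.
    params = [p for p in model.parameters() if p.requires_grad]

    # Accumulated gradient on the forget set D_f (g_f) and on the retain set D_r (g_r).
    g_f = parameters_to_vector(torch.autograd.grad(loss_fn(model, forget_batch), params))
    g_r = parameters_to_vector(torch.autograd.grad(loss_fn(model, retain_batch), params))

    # Project g_f onto the normal plane of g_r so that the inner product of the two
    # gradients vanishes (the small constant only guards against division by zero).
    g_f_tilde = g_f - (g_f @ g_r) / (g_r @ g_r + 1e-12) * g_r

    # Single parameter update theta^- = theta_hat - eta * g_f_tilde, as stated in the text.
    theta = parameters_to_vector(params) - eta * g_f_tilde
    vector_to_parameters(theta, params)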
§.§ How to Evaluate Feature Ratio We first prepare a classification model that classifies the image having a target feature from the remains. In order to obtain a highly accurate classifier, we search for the best classifier which shows over 95% accuracy. In the experiment, we use AllCNN <cit.> to classify class 1 over the other in MNIST with 1,3,8 (MNIST381), and ResNet18 <cit.> to classify male over female on CelebA. After unlearning, we generate 10000 samples from the generator and feed the sample to the pre-trained classifier. Assuming that the classifier classifies the image well, the prediction result would the probability that the generated output contains the features to be unlearned. § DEFINITIONS AND PROOF FOR THEORETICAL ANALYSIS In  <cit.> and  <cit.>, an influence of sample z on weight parameter is defined as the product of its gradient and inverse of hessian. Moreover, an influence of sample z to test loss of sample z' defined in as following: (Equation 2 from <cit.>) Suppose up-weighting a converged parameter θ̂ by small ϵ, which gives us new parameters θ̂_ϵ,z*argmin_θ∈Θ1/n∑_i=1^nℒ(z_i, θ) + ϵℒ(z,θ). The influence of up-weighting z on the loss at an arbitrary point z' against has a closed-form expression: ℐ_up,loss(z, z') dℒ(z',θ̂_ϵ,z)/dϵ|_ϵ=0 = ∇_θℒ(z',θ̂)^⊤ H_θ̂^-1∇_θℒ(z,θ̂) where H_θ̂1/n∑_i=1^n∇_θ^2ℒ(z_i, θ̂) is the Hessian and is positive definite (PD) by assumption on convex and Lipschitz continuity of loss ℒ. (Theorem <ref> from Section <ref>) Reducing the influence of samples z∈ D_f in training data with regard to test loss is formulated as: I^'_up,loss(D_f,z') → 0, which is equivalent to ∇_θℒ(z',θ̂)^T ∑_z ∈ D_f∇_θℒ(z,θ̂) → 0 where z'∈ D_r in our scenario. The second-order influence of D_f, ℐ^(2)_up, param, is formulated as sum of first-order influence ℐ^(1)_up, param and ℐ^' _up, param, which captures the dependency of the terms in 𝒪(ϵ^2) on the group influence is defined as following: ℐ^'_up, param(D_f,z') = 𝒜 H_θ̂^-1∑_z ∈ D_f∇_θℒ(z,θ̂) where 𝒜 = p/1-p(I-(∇^2 L(θ^*))^-11/|𝒰|∑_z∈𝒰∇^2 l(h_θ^*(z))) (from <cit.>). The influence of samples in D_f on the test loss of z' can be formulated as: ℐ_up, loss(D_f,z') = ∇_θℒ(z,θ̂)^T ℐ_up, param(D_f) which can be equivalently applied to all orders of ℐ including ℐ^(1), ℐ^(2), ℐ^'. Then, ℐ^'_up, loss(D_f,z') = 0 is now reduced to ∇_θℒ(z,θ̂)^T 𝒜 H_θ̂^-1∑_z ∈ D_f∇_θℒ(z,θ̂) = 0 which satisfies the right-hand side of Theorem <ref> where 𝒜 and H_θ̂^-1 are negligible.
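As a complement, the feature-ratio evaluation described above — generate samples from the (un)learned model and let the pre-trained classifier count how many still contain the target feature — reduces to a short loop. The sketch below assumes a callable generator(n) returning n generated images and a callable classifier returning class logits; both names are illustrative and are not taken from the original text.

import torch

@torch.no_grad()
def feature_ratio(generator, classifier, target_class, n_samples=10000, batch_size=500):
    # `generator` and `classifier` are assumed interfaces (illustrative names):
    # the former samples images from the VAE decoder, the latter is the
    # pre-trained AllCNN / ResNet18 evaluator.
    hits, total = 0, 0
    while total < n_samples:
        n = min(batch_size, n_samples - total)
        images = generator(n)
        preds = classifier(images).argmax(dim=1)
        hits += int((preds == target_class).sum())
        total += n
    return hits / total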
http://arxiv.org/abs/2307.06048v1
20230712100022
Online Inventory Problems: Beyond the i.i.d. Setting with Online Convex Optimization
[ "Massil Hihat", "Stéphane Gaïffas", "Guillaume Garrigos", "Simon Bussy" ]
math.OC
[ "math.OC", "cs.LG", "stat.ML" ]
Asymmetry of 2-step Transit Probabilities in 2-Coloured Regular Graphs [ ====================================================================== We study multi-product inventory control problems where a manager makes sequential replenishment decisions based on partial historical information in order to minimize its cumulative losses. Our motivation is to consider general demands, losses and dynamics to go beyond standard models which usually rely on newsvendor-type losses, fixed dynamics, and unrealistic i.i.d. demand assumptions. We propose MaxCOSD, an online algorithm that has provable guarantees even for problems with non-i.i.d. demands and stateful dynamics, including for instance perishability. We consider what we call non-degeneracy assumptions on the demand process, and argue that they are necessary to allow learning. § INTRODUCTION An inventory control problem is a problem faced by an inventory manager that must decide how much goods to order at each time period to meet demand for its products. The manager's decision is driven by the will to minimize a certain regret, which often penalizes missed sales and storage costs. It is a standard problem in operations research and operations management, and the reader unfamiliar with the topic can find a precise description of this problem in Section <ref>. The classical literature of inventory management focuses on optimizing an inventory system with complete knowledge of its parameters: we know in advance the demands, or the distribution they will be drawn from. Many efforts have been put into characterizing the optimal ordering policies, and providing efficient algorithms to find them. See e.g. the Economic Order Quantity model <cit.>, the Dynamic Lot-Size model <cit.> or the newsvendor model <cit.>. Nevertheless, in many applications, the parameters of the inventory system are unknown. In this case, the manager faces a joint learning and optimization problem that is typically framed as a sequential decision making problem: the manager bases its replenishment decisions on data that are collected over time, such as past demands (observable demand case) or past sales (censored demand case). The early attempts to solve these online inventory problems employed various techniques and provided only weak guarantees or no guarantees at all <cit.>. With the recent advances in online learning frameworks such as online convex optimization (OCO), bandits, or learning with expert advice, the literature of learning algorithms for inventory problems took a great leap forward. There is currently a growing body of research aiming at solving various online inventory problems while providing strong theoretical guarantees in the form of regret bounds <cit.>. However, these works rely on mathematically convenient but unrealistic assumptions. The most common one being that the demands are assumed to be independent and identically distributed (i.i.d.) across time, which rules out correlations and nonstationarities that are common in real-world scenarios. Furthermore, these works focus on specific cost structures (typically the newsvendor cost) and inventory dynamics (like lost sales models with nonperishable products), which we detail in Section <ref>. The main goal of this paper is to go beyond these restrictions and consider general demand processes, losses and dynamics, in order to provide numerical methods backed with theoretical guarantees compatible with real-world problems. 
To do so, we recast the online inventory problem into a new framework called Online Inventory Optimization (OIO), which extends OCO. Our main contribution is a new algorithm called MaxCOSD, which can be seen as a generalization of the Online Subgradient Descent method. It solves OIO problems with provable theoretical guarantees under minimial assumptions (see Section <ref>). Here is an informal version of our main statement: Consider an OIO problem that satisfies convexity and boundedness assumptions. Assume further that demands are not degenerate (see Assumption <ref>). Then, running MaxCOSD (see Algorithm <ref>) with adequate adaptive learning rates gives an optimal O(√(T)) regret, both in expectation and in high probability. Our main assumption is a non-degeneracy hypothesis on the demand process, which we present and discuss in Section <ref>. This assumption generalizes typical hypotheses made in the inventory literature, while not requiring the demand to be i.i.d.. We also show that this assumption is sharp, in the sense that the OIO cannot be solved without such assumption. Finally, in Section <ref> we present numerical experiments on both synthetic and real-world data that validate empirically the versatility and performances of MaxCOSD. The supplementary material gathers the proofs of all our statements. This paper helps to bridge the gap between OCO and inventory optimization problems, and we hope that it will raise awareness of OCO researchers to this family of under-studied problems, while being of importance to the industry. § A GENERAL MODEL FOR ONLINE INVENTORY PROBLEMS In this section, we present a new but simple model allowing to study a large class of online inventory problems. Then, in a series of remarks we discuss particular instances of these problems and the limitations of our model. §.§ Description of the model and main assumptions In the following, n∈={1,2,…} refers to the number of products and ⊂_+^n denotes the feasible set. An online inventory optimization (OIO) problem is a sequential decision problem where an inventory manager interacts with an environment according to the following protocol. First, the environment sets the initial inventory state to zero (x_1=0∈^n), and chooses (possibly random) demands d_t ∈^n_+ and losses ℓ_t:^n→ for every time period t∈. Then, the interactions begin and unfold as follows, for every time period t∈: * The manager observes the inventory state x_t∈^n, where x_t,i encodes the quantity of the i^th product available in the inventory. * The manager raises this level by choosing an order-up-to level y_t∈ which satisfies the feasibility constraint: y_t ≽ x_t. Then, the manager receives instantaneously y_t,i-x_t,i≥ 0 units of the i^th product. * The manager suffers a loss ℓ_t(y_t) and observes a subgradient g_t∈∂ℓ_t(y_t). * The environment updates the inventory state by choosing x_t+1∈^n satisfying the following inventory dynamical constraint: x_t+1≼y_t-d_t. The goal of the manager is to design an online algorithm that produces feasible order-up-to levels y_t which minimizes the cumulative loss suffered using past observations (past inventory states and subgradients). Let us emphasize here the fact that demands and losses are not directly observable. Throughout the paper, we make the following assumptions on the feasible set and the losses. [Convex and bounded problem]   * (Convex and bounded constraint) The feasible set is closed, convex, nonnegative (⊂^n_+) and bounded: 𝒴≤ D for some D≥0. 
* (Convex losses) For every t ∈, the loss function ℓ_t is convex. * (Uniformly bounded subgradients) There exists G >0 such that, for all t ∈, y∈ and g∈∂ℓ_t(y) we have g_2 ≤ G. Apart from the non-negativity assumption ⊂^n_+ which is specific to inventory problems, Assumption <ref> is very common in online convex optimization <cit.>. Given a horizon T∈, we measure the performances of an algorithm by the usual notion of regret R_T which is defined as the difference between the cumulative loss incurred by the algorithm and that incurred by the best feasible constant strategy[In the context of inventory problems, these constant strategies are known under the name of (stationary) base-stock policies, S-policies, or order-up-to policies. See e.g. <cit.>.]: R_T = ∑_t=1^Tℓ_t(y_t) - inf_y∈∑_t=1^Tℓ_t(y). Observe that R_T is possibly random, so we call its expectation the expected regret R_T. It is a simple exercise to see that under Assumption <ref> we always have R_T≤ DGT. See Lemma <ref> in the supplementary material. Thus, our goal is to design algorithms with sublinear regret with respect to T, achieveing 𝔼[R_T] ≤ o(T). Note that some authors consider the equivalent notion of averaged regret (1/T)R_T. In this context, we also talk about no-regret algorithms. §.§ Description of standard inventory models and limitations of our model Demands are modeled by a stochastic process (d_t)_t∈⊂_+^n with fixed in advance distribution. We make no assumptions of regularity, stationarity or independence. Thus, our model accommodates both i.i.d. <cit.> and deterministic demands <cit.>, while also allowing for correlations and nonstationarities that appear for instance in autoregressive models. However, we rule out strategic behaviors: this is not a game-theoretic model. Inventory states x_t are constrained by the inventory dynamical constraint (<ref>) which links states, demands, and order-up-to levels. This constraint resembles the notion of partial perishability introduced in <cit.> which imposes further that inventory states are non-negative. In the following, we present some standard dynamics that conform to our model. We warn the reader that we try here to simplify the vocabulary used in the scattered inventory literature, and that some authors <cit.> may refer to these dynamics using different terms[For instance the stateless dynamic is usually referred to as the "perishable" setting, and lost sales and backlogging dynamics may both be referred to as "nonperishable" settings.]. * Stateless dynamic. In stateless inventory problems, we assume that no product is carried over from one period to the other, i.e. x_t = 0. Observe that due to the non-negativity assumption ⊂_+^n, the feasibility constraint (<ref>) is satisfied by any choice of y_t∈ in stateless problems. This means that stateless inventory problems coincide with the usual online convex optimization (OCO) framework <cit.>. See <ref> for a discussion on the relationship between OCO and OIO. Any dynamic which is not stateless is called stateful, see below. * Backlogging dynamic. In backlogging inventory problems, excess demand stays on the books until it is satisfied and inventory leftovers are carried over to the next period. It corresponds to set x_t+1=y_t-d_t. Notice that in this case the inventory state may be negative to represent backorders. This kind of dynamic has been widely studied in the context of classical inventory theory due to its linear nature, see e.g. <cit.>. * Lost sales dynamic. 
Assume that products are nonperishable, excess demand is lost and inventory leftovers are carried over to the next period. We refer to this case as the lost sales dynamic which corresponds to set x_t+1=y_t-d_t. See e.g. <cit.>. * Perishable dynamic. In perishable inventory systems <cit.>, newly ordered products are fresh units that have a fixed usable lifetime. To model such a dynamic, it is necessary to track the entire age distribution of the on hand inventory and to specify the stockout type (lost sales or backlogging) and the issuing policy (i.e. how items are issued to meet demand). For instance, <cit.> describes a perishable setting modeling a single product with first-in-first-out issuing policy, which satisfies our inventory dynamical constraint (<ref>). All those dynamics are what we call deterministic dynamics, in the sense that they take the form: (∀ t ∈ℕ) x_t+1 = X_t(y_1,d_1,…,y_t, d_t), where X_t: (×_+^n)^t→^n is a fixed in advance function satisfying X_t(y'_1,d'_1,…,y'_t,d'_t) ≼y'_t-d'_t for all realizations (ℓ'_t)_t∈, (d'_t)_t∈ and (y'_t)_t∈. The feasible set models constraints on the order-up-to levels. Typical choices include box constraints =∏_i=1^n [y_i, y_i] or capacity constraints <cit.> of the form = {y ∈ℝ^n_+ | ∑_i=1^n y_i ≤ M} which both satisfy Assumption <ref>.<ref>. Due to a lack of convexity, our model does not allow for discrete sets like ⊂{0,1,…}^n which appear for instance in <cit.>. Losses (ℓ_t)_t∈ are random functions drawn before the interactions start. As for demands, this allows for i.i.d. losses and deterministic losses. Most interestingly, losses can depend on the demands. This allows to consider the newsvendor loss, which writes ℓ_t(y) = c(y,d_t) with: c(y,d)= ∑_i=1^n( h_iy_i-d_i+p_id_i-y_i). Here h_i∈_+ and p_i∈_+ are respectively the unit holding cost (a.k.a. overage cost) and unit lost sales penality cost (a.k.a. underage cost) of product i∈[n]. The newsvendor loss satisfy assumptions <ref>.<ref> and <ref>.<ref> with G=√(n)max_i∈[n]max{h_i,p_i}. Our model also accommodates the newsvendor loss with time-varying unit cost parameters, as long as these remain bounded. Because the losses are drawn before-hand, our model usually does not allow to incorporate costs that depend explicitly on the inventory states such as purchase costs, outdating costs, or fixed costs. However there are exceptions: when the dynamic is lost sales, purchase costs can be included into our model, by considering the newsvendor loss onto which a cost transformation is applied (a.k.a explicit formulation <cit.>). See also <cit.> and the references therein. To handle arbitrary losses, we required in our model that in addition to inventory states x_t, at least one subgradient g_t∈∂ℓ_t(y_t) is revealed at each period. In the case of the newsvendor loss, this is less demanding than both the observable demand setting and the censored demand setting <cit.>. In the former, the demand d_t is revealed instead of a subgradient g_t, meaning that the manager has complete information on the newsvendor loss ℓ_t=c(·,d_t). In the latter, the sale s_t := min{y_t,d_t} is revealed, allowing the manager to compute a subgradient through the following formula (see Lemma <ref> in Appendix <ref>): (h_iy_t,i>s_t,i-p_iy_t,i=s_t,i)_i∈[n]∈∂ℓ_t(y_t). In this case, we see that no randomization is involved in the choice of the subgradient. 
This means that the subgradient selection is deterministic, in the sense that: (∀ t ∈ℕ) g_t = Γ_t(ℓ_t, y_t), where Γ_t:^^n×→^n is a fixed in advance function such that Γ_t(ℓ'_t,y'_t)∈∂ℓ'_t(y'_t) for all realizations (ℓ'_t)_t∈ and (y'_t)_t∈. In this final remark, we would like to point out that OIO is a novel and strict extension of OCO, that is, OIO cannot be casted into OCO or one of its known extensions. More details on this are provided in Appendix <ref>. § PARTIAL RESULTS FOR SIMPLE INVENTORY PROBLEMS Previous works on online inventory problems mainly focused on two settings: stateless inventory problems and stateful inventory problems with i.i.d. demands. Both settings are discussed here. §.§ Stateless inventory problems In the literature of stateless inventory problems, arbitrary deterministic demands have already been considered. This has been done for instance in <cit.>, which assume demand is observable and rely on the "learning with expert advice" framework. On the other hand <cit.> is the first work that considered the stateless setting with censored demand. See also <cit.> which tackled these problems in the discrete case, by reducing them to partial monitoring (an online learning framework that generalizes bandits). All these works achieve a O(√(T)) regret (up to logarithmic terms), but are restricted to the newsvendor cost structure. In our work, we aim at solving inventory problems with arbitrary demands and losses. Recall that under Assumption <ref>, stateless inventory problems coincide with the standard OCO framework <cit.> since feasibility (<ref>) is trivially verified (see Remark <ref>). Thus, a natural choice to solve stateless inventory problems is the Online Subgradient Descent (OSD) method, which we recall in Algorithm <ref>. Classical regret analysis of OSD (this is essentially proven in <cit.>, see also Corollary <ref> in the appendix) shows that taking decreasing learning rates of the form η_t=γ D/(G√(t)) where γ>0, leads to a regret bound R_T = O(GD √(T)). It must be noted that the O(√(T)) scaling is optimal under Assumption <ref> (see e.g. <cit.> or <cit.>). §.§ Stateful i.i.d. inventory problems When non-trivial dynamics are involved, inventory problems are much more complex and have mainly been studied in the i.i.d. demands framework. We review here the literature of joint learning and inventory control with censored demand and refer to the recent review of <cit.> for further references. We stress that all those papers obtain rates for the pseudo-regret, a lower bound of the expected regret which we consider in this paper (see Appendix <ref> for more details). The seminal work of <cit.> is the first that derives regret bounds for the single-product i.i.d. newsvendor case under censored demand, it is also the sole work that considers general dynamics through their notion of partial perishability <cit.>. They were able to design an algorithm called Adaptive Inventory Management (AIM) based on a subgradient descent and dynamic projections onto the feasibility constraint (<ref>) which achieves a O(√(T)) pseudo-regret. Their main assumption is that the demands should not be degenerate in the sense that d_1>0 and that the manager should know a lower bound 0 < ρ < d_1. This lower bound is then used in AIM to tune adequately the learning rate of the subgradient descent. Their analysis is based on results from queuing theory which rely heavily on the i.i.d. assumption. 
<cit.> designed the Data-Driven Multi-product (DDM) algorithm which extend the AIM method of <cit.> to the multi-product case under capacity constraints. They also derived a O(√(T)) pseudo-regret bound by assuming further that demands are pairwise independent across products and that d_1,i>0 for all i∈[n] amongst other regularity conditions. <cit.> tackled the case of single-product i.i.d. perishable inventory systems with outdating costs. They designed the Cycle-Update Policy (CUP) algorithm which updates the order-up-to level according to a subgradient descent, but only when the system experiences a stockout, i.e. when x_t=0, the order-up-to level remains unchanged otherwise. Feasibility is guaranteed using such a policy. However, in order to derive O(√(T)) pseudo-regret they need to ensure that the system experiences frequently stockouts. To do so, they consider a stronger form of non-degeneracy, namely, d_1≥ D>0 where =[0,D]. Variants of the subgradient descent have also been developed in order to achieve O(√(T)) pseudo-regret in inventory systems that includes lead times <cit.> or fixed costs <cit.> which are both beyond the scope of our model. To summarize, an optimal O(√(T)) rate for the pseudo-regret is achievable in many stateful inventory problems, under the i.i.d. assumption. To prove so, most of the cited works developed specific variants of the subgradient descent that accommodates the specific dynamic at play. We will show in Section <ref> that this optimal O(√(T)) rate can be achieved by our algorithm MaxCOSD when applied to general inventory problems, with no i.i.d. assumption on the demand. § MAXCOSD: AN ALGORITHM FOR GENERAL INVENTORY PROBLEMS In this section, we introduce and study our main algorithm: the Maximum Cyclic Online Subgradient Descent (MaxCOSD) algorithm. It is a variant of the subgradient descent where instead of changing the order-up-to levels at every period, updates are done only at certain update periods denoted (t_k)_k∈. These define update cycles _k={t_k,…, t_k+1-1} during which the order-up-to level remains unchanged: y_t=y_t_k for all t∈_k. Update periods are dynamically triggered by verifying, at the beginning of each time period t∈, whether a candidate order-up-to level ŷ_t is feasible or not. This candidate is computed by making a subgradient step in the direction of the subgradients accumulated during the cycle and using the following adaptive learning rates,[It may happen that η_t is undefined due to a denominator that is zero in Eq. (<ref>), in such cases set η_t=0. ] η_t = γ D/√(∑_s=t_k^tg_s_2^2+∑_m=1^k-1∑_s∈𝒯_m g_s_2^2) for all t∈_k. The pseudo-code for MaxCOSD is given in Algorithm <ref>. MaxCOSD is inspired by CUP <cit.>. First, they are both based on cyclical updates but their cycle definition differ. In CUP stockouts trigger updates, whereas MaxCOSD relies directly on the feasibility condition, making its updates more frequent. Also, we use adaptive learning rates inspired by AdaGrad-Norm learning rates <cit.> which allows us to be adaptive to the constant G and obtain high probability regret bounds which are not available for CUP. Finally, and most importantly, the assumptions required by CUP are restrictive: i.i.d. demands, single-product, perishable dynamic and a strong form of a demand non-degeneracy. On the other hand, MaxCOSD performs well under much milder assumptions, which we introduce next. 
[Uniformly probably positive demand] There exists μ∈(0,1] and ρ >0 such that, for all t∈, almost surely, ∀ i ∈ [n], d_t,i≥ρ| ℓ_1,d_1,…, ℓ_t-1, d_t-1≥μ. In simple settings we recover through Assumption <ref> conditions that already appeared in the literature: * In single-product i.i.d. newsvendor inventory problems, our assumption is equivalent to the existence of ρ>0 such that d_1≥ρ>0, that is, d_1>0>0, or equivalently d_1>0. This is exactly the non-degeneracy assumption required by <cit.> in AIM. * In its multi-product extension, <cit.> assumes also pairwise independence across products. If we rather require mutual independence across products, then, Assumption <ref> rewrites d_1,1>0⋯d_1,n>0>0, thus, our assumption reduces to d_1,i>0 for all i∈[n] which is also required by <cit.> in their algorithm DDM. * We recover an assumption made for CUP <cit.> by requiring ρ=D where =[0,D]. * If the demand is deterministic, then μ=1 and Eq. (<ref>) becomes d_t,i≥ρ for all i∈[n]. * If the demand is discrete, i.e. d_t,i∈{0,1,…} for all t∈,i∈[n], then, we can take ρ=1 and rewrite Eq. (<ref>) as follows: ∃ i∈[n], d_t,i=0 | ℓ_1,d_1,…, ℓ_t-1, d_t-1≤ 1-μ. In addition to Assumption <ref> we also introduce a mild technical condition. [Deterministic dynamic and subgradient selection] Dynamics and subgradient selections are deterministic, see Eq. (<ref>) and Eq. (<ref>). We can now state our main result: under this new set of assumptions MaxCOSD achieves an optimal O(√(T)) regret bound both in expectation and in high probability. Consider an inventory problem, and let assumptions <ref>, <ref> and <ref> hold. Then, MaxCOSD (see Algorithm <ref>) run with y_1∈ and γ>0 is feasible. Furthermore, when γ∈ (0,ρ/D], it enjoys the following regret bounds for all T∈, R_T≤√(2) G D/μ(1/2γ+γ+1)√(T), and for any confidence level δ∈(0,1) we have with probability at least 1-δ, R_T ≤ GD (1/2γ+γ+1)(1+1/μlog(T/δ))√(T). § NON-DEGENERATE DEMANDS ARE NEEDED FOR STATEFUL INVENTORY PROBLEMS Throughout this paper, we have seen instances of OIO which can be solved with sublinear regret rates: stateless OIO (equivalent to OCO), and some stateful OIO. It must be noted that, contrary to OCO, solving those stateful OIO problems required a non-degeneracy assumption on the demand (see Assumption <ref> and Section <ref>). We argue here that such an assumption is necessary for solving stateful OIO. Note that this idea is not new, and was already observed in the conclusion of <cit.>: "To control for the impact of overordering, demands must be bounded away from zero, at least in expectation.". Our contribution is to make this observation formal. Given any feasible deterministic[In general, a deterministic algorithm is defined by fixed in advance functions Y_t:(^n×^n)^t-1→ and outputs y_t=Y_t(g_1,x_2,…,g_t-1,x_t) for all t∈.] algorithm for the single-product lost sales newsvendor inventory problem with observable demand over 𝒴 = [0,D], there exists a sequence of demands such that the regret is linear, i.e. R_T = Θ(T). Proposition <ref> shows that Assumption <ref> is not sufficient to reach sublinear regret in general inventory problems. This is totally unusual from an OCO perspective, and is a specificity of stateful OIO. Furthermore, the above result shows that what prevents us from reaching sublinear rates is not the limited feedback (demands and losses are observable in this example) but rather zero demands. This is why it is necessary to make an assumption preventing demands to be too small, in some sense. 
Note that one may think of imposing positive demands to circumvent this difficulty, but this is not sufficient. Indeed, a sequence of demands converging too fast to zero can also be problematic. Given any feasible algorithm for the single-product lost sales problem with observable demand over =[0,D] such that y_1∈(0,D], there exists a constant sequence of losses and a sequence of positive demands such that the regret is linear, i.e. R_T = Θ(T). Let us now investigate why degenerated demand becomes a problem when going from stateless to stateful OIO. The main difference between the two is that the feasibility constraint is always trivially satisfied for stateless OIO (see Remark <ref>). Instead, for stateful OIO, we can show that the higher is the demand, the easier it is for the feasibility constraint to be satisfied. Let y,y',d∈_+^n. If y'-y_2 ≤min_i∈[n]d_i, then, y'≽y-d. In particular, given an inventory problem and a time period t∈, taking y_t+1∈ such that y_t+1-y_t_2≤min_i∈[n]d_i,t ensures that y_t+1 is feasible, in the sense that x_t+1≼ y_t+1. The above lemma shows that if y_t+1 is taken close enough from the previous y_t, then the algorithm is feasible. The key point here is that "close enough" is controlled by the demand, meaning that when the demand is closer to zero there are less feasible choices for the manager. In such a case, we understand that it may be impossible to achieve sublinear regret, because the set of feasible choices could be too reduced. The distance between two consecutive decisions can easily be controlled in methods based on subgradient descents, through their learning rates. This is why Lemma <ref> is very helpful in the design of efficient feasible algorithms. It has been employed in the proof of our main result regarding MaxCOSD (Theorem <ref>). In the following, we further illustrate its usefulness by showing that OSD (see Algorithm <ref>) with adequate learning rates is feasible when the demand is uniformly positive. [Uniformly positive demand] There exists ρ>0 such that for all t∈, i∈[n], d_t,i≥ρ. Consider an inventory problem, and let assumptions <ref> and <ref> hold. Then, OSD (see Algorithm <ref>) run with y_1∈ and η_t=γ D/(G√(t)) where γ∈(0,ρ/D], is feasible and satisfies for all T∈ that R_T ≤ (1+2γ)(2γ)^-1 GD √(T). § NUMERICAL RESULTS The goal of the following numerical experiments is to show the versatility and performances of MaxCOSD in various settings. Let us consider the following problems. * Setting 1. Single-product lost sales inventory problem with i.i.d. demands drawn according to Poisson(1). * Setting 2. Single-product perishable inventory problem with a lifetime of 2 periods and i.i.d. demands drawn according to Poisson(1). * Setting 3. Multi-product lost sales inventory problem with n=100 and capacity constraints. Demands are i.i.d. and drawn independently across products according to Poisson(λ_i) where the intensities λ_i have been drawn independently according to Uniform[1,2]. * Setting 4. Multi-product lost sales inventory problem with n=3049 and capacity constraints. Demands are taken from the real-world dataset of the M5 competition <cit.>. * Setting 5. Multi-product lost sales inventory problem with n=3049 and box constraints. As in Setting 4 we considered demands from the M5 competition dataset <cit.>. We use the newsvendor loss in all the settings. In all the settings the cost parameters satisfy p_i/h_i=200 since this ratio is known to exceed 200 in many applications <cit.>. 
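Before turning to the numerical evaluation, the update rule of MaxCOSD can be summarized in a few lines of code. The sketch below is illustrative rather than a reference implementation: it assumes for concreteness a box feasible set Y = [0, box_ub]^n, so that the projection is a simple clipping, and the class and attribute names are ours; replacing the projection step adapts it to capacity constraints.

import numpy as np

class MaxCOSD:
    def __init__(self, y1, box_ub, D, gamma):
        # box_ub defines the assumed box feasible set [0, box_ub]^n; D and gamma are the
        # constants entering the adaptive learning rate eta_t = gamma * D / sqrt(...).
        self.box_ub, self.D, self.gamma = box_ub, D, gamma
        self.anchor = np.asarray(y1, dtype=float)        # y_{t_k}, the level kept during a cycle
        self.y_cand = self.anchor.copy()                 # candidate order-up-to level yhat_t
        self.cycle_grad = np.zeros_like(self.anchor)     # sum of subgradients in the current cycle
        self.cycle_sq = 0.0                              # sum of ||g_s||^2 in the current cycle
        self.past_sq = 0.0                               # sum of ||g_s||^2 over previous cycles

    def act(self, x):
        # Beginning of period t: if the candidate is feasible (x_t <= yhat_t), open a new cycle.
        if np.all(x <= self.y_cand):
            self.anchor = self.y_cand.copy()
            self.past_sq += self.cycle_sq
            self.cycle_sq = 0.0
            self.cycle_grad[:] = 0.0
        return self.anchor                               # y_t = y_{t_k} throughout the cycle

    def update(self, g):
        # End of period t: accumulate the observed subgradient and recompute the candidate.
        g = np.asarray(g, dtype=float)
        self.cycle_grad += g
        self.cycle_sq += float(g @ g)
        total = self.past_sq + self.cycle_sq
        eta = self.gamma * self.D / np.sqrt(total) if total > 0 else 0.0
        self.y_cand = np.clip(self.anchor - eta * self.cycle_grad, 0.0, self.box_ub)

By construction the produced levels are feasible: a new cycle is only opened when the candidate dominates the observed inventory state, and within a cycle the order-up-to level is kept unchanged.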
In settings 1, 2 and 3, h_i=1, and in settings 4 and 5, h_i and p_i are proportional to the real average selling costs. In settings 1, 2, 3 and 4 we compare MaxCOSD against the following baselines: AIM <cit.> for Setting 1, CUP <cit.> for Setting 2, and DDM <cit.> for settings 3 and 4. In Setting 5, we ran a parallelized version of MaxCOSD, that is, one instance of MaxCOSD per product, and a parallelized version of AIM. Notice that in Settings 4 and 5 the demands are not i.i.d. and thus do not fit the assumptions of the baselines considered. All the algorithms have been initialized with y_1=0. Settings 1, 2 and 3 have been run 10 times, with different demand realizations generated through independent samples. Figure <ref> shows, for every setting, the regret obtained after T periods as a function of the learning rate parameter γ∈ [10^-5, 10^1]. We picked T=1969 for all the settings because it corresponds to the number of periods available in our real-world dataset <cit.>. We see that MaxCOSD performs well compared to the baselines whenever the number of handled products remains low (Settings 1, 2, 3 and 5). In contrast, MaxCOSD is less efficient when the number of products becomes large, in particular in Setting 4. Remember that in this setting the baseline DDM has no theoretical guarantees whatsoever. The performance of MaxCOSD in the large-n regime can be explained by the fact that the cycles become longer, as it becomes less likely that the feasibility condition is satisfied. Indeed, we have seen in Lemma <ref> that the larger the overall demand is, the easier feasibility holds. But when n grows, min_i d_t,i becomes smaller, making the problem harder.
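For illustration, Setting 1 can be re-created with a short loop on top of the MaxCOSD sketch given above. The horizon T=1969, the Poisson(1) demand, the cost ratio p/h=200, the censored feedback and the initialization y_1=0 follow the setting description; the bound on the feasible set, the random seed and the value of gamma are arbitrary choices made only for the sake of the example.

import numpy as np

rng = np.random.default_rng(0)
h, p = 1.0, 200.0            # holding cost and lost-sales penalty, p/h = 200
box_ub = 10.0                # assumed bound on the feasible set Y = [0, box_ub]
T, gamma = 1969, 0.1         # horizon from the experiments; gamma is one grid point

agent = MaxCOSD(y1=np.zeros(1), box_ub=box_ub, D=box_ub, gamma=gamma)
x = np.zeros(1)              # initial inventory state x_1 = 0
cumulative_loss = 0.0
for t in range(T):
    y = agent.act(x)
    d = rng.poisson(1.0, size=1).astype(float)            # i.i.d. Poisson(1) demand
    cumulative_loss += float(h * np.maximum(y - d, 0.0) + p * np.maximum(d - y, 0.0))
    s = np.minimum(y, d)                                   # censored feedback: sales only
    g = h * (y > s) - p * (y == s)                         # newsvendor subgradient from sales
    agent.update(g)
    x = np.maximum(y - d, 0.0)                             # lost-sales dynamic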
Also, we could hope to further weaken our non-degeneracy Assumption <ref>, by making an hypothesis which applies independently to each product, without having to assume pairwise independence across products. We believe that an adequate adaptation of online convex optimization techniques to the online inventory framework will prove to be a successfully strategy for overcoming those challenges. plainnat § ANALYSIS OF MAXCOSD The goal of this appendix is to study the MaxCOSD algorithm (see Algorithm <ref>) and prove Theorem <ref>. To do so, we start by introducing a generalized version of MaxCOSD named Cyclic Online Subgradient Descent (COSD) which has general update periods and learning rates (see Algorithm <ref>). In Proposition <ref> we provide a general analysis of COSD which shows that it is sufficient to control the cycles' length to derive O(√(T)) regret bounds. As a byproduct of this proposition we also derive the classical analysis of OSD (see Corollary <ref>). A way of ensuring that the cycles' lengths are efficiently controlled is through a probabilistic property (see Assumption <ref>) which is similar to sub-exponential concentration <cit.>. Finally, using Lemma <ref> and Assumptions <ref>, <ref> and <ref> we show that when γ∈(0, ρ/D] the cycles' length of MaxCOSD satisfy Assumption <ref> which allows us to conclude with Theorem <ref>. §.§ Design of COSD The Cyclic Online Subgradient Descent (COSD) generalizes MaxCOSD by allowing arbitrary learning rates and update periods. Recall that update periods are allowed to be dynamically defined, for this reason we will refer to a sequence of update periods (t_k)_k∈ as an update strategy which is formally defined below. An update strategy (t_k)_k∈⊂ is a sequence of random variables such that, t_1=1, t_k<t_k+1 for every k∈, and for every t∈, k∈, the event {t_k=t} is observable by the manager at the beginning of period t, i.e. it belongs to σ(g_1,x_2,…,g_t-1, x_t). The pseudo-code of COSD is given in Algorithm <ref>. By choosing appropriately the update strategy we recover the following algorithms: * OSD. If updates are made at every period, i.e. t_k=k, we recover OSD. Its feasibility is not guaranteed. * Minibatch OSD. Given a fixed minibatch size τ∈ℕ, the updates period are defined offline via t_k = 1 + (k-1)τ. * CUP. For every k ∈ℕ, t_k is defined as the periods where the inventory is empty, that is, t_1=1, t_k+1 = inf{t≥ t_k +1 : x_t ≼0} for k∈. This update strategy corresponds to that used by CUP <cit.>. One easily sees that for such update periods t_k, feasibility always holds. * MaxCOSD. To recover MaxCOSD we define dynamically the update strategy as the periods where the candidate order-up-to levels (ŷ_t)_t∈ are feasible. Formally, it writes: t_1=1, t_k+1 = inf{t≥ t_k+1: x_t ≼ŷ_t} for k∈, where (ŷ_t)_t∈ is defined by: ŷ_1=y_1, ŷ_t+1 = _(ŷ_t_k-η_t∑_s=t_k^tg_s), with k=max{ j≥1 : t_j ≤ t } or equivalently t∈_k={t_k, …, t_k+1-1}. Notice that at update periods the implemented order-up-to levels coincide with the candidate order-up-to level, that is, we have y_t_k=ŷ_t_k for all k∈. §.§ Analysis of COSD and OSD The following proposition summarizes the main properties of COSD. For convenience, we will use the notation t̅_k := t_k+1 - 1 for the last period of the k^th cycle _k. Let assumptions <ref>.<ref> and <ref>.<ref> be satisfied. 
Given any update strategy and any sequence of learning rates (η_t)_t∈ such that: 0<η_t̅_k+1≤η_t̅_k for all k∈, COSD (see Algorithm <ref>) has the following properties: * it is feasible if and only if for all k∈{2,3,…}, y_t_k≽ x_t_k, * for all K∈, the regret at the end of the K^th cycle satisfies, R_t̅_K≤D^2/2η_t̅_K +1/2∑_k=1^Kη_t̅_k∑_t∈𝒯_k g_t^2_2. Let us prove claim <ref>. By definition, the algorithm produces feasible sequence of order-up-to levels if y_t≽ x_t for all t∈. But for all t∈{t_k+1,…, t̅_k} for some k∈, we have necessarily y_t=y_t-1≽y_t-1-d_t-1≽ x_t where the last inequality comes from (<ref>). Thus we only need to check feasibility at the update periods, i.e. check that y_t_k≽ x_t_k for all k∈. Furthermore, it is clear that the latter is verified for k=1 since x_t_1=x_1= 0≼ y_t_1. Now to prove the regret bound (claim <ref>) we follow the lines of the classical analysis of OSD (see e.g. the proof of <cit.> or that of <cit.>) when run against the sequence of losses (∑_t∈_kℓ_t)_k∈ instead of (ℓ_t)_t∈. For convenience, we will write =∑_t∈_k g_t for all k∈. Let y∈, K∈ and k∈ [K], we start by bounding ∑_t∈_kℓ_t(y_t)-ℓ_t(y). By definition of the subgradient g_t∈∂ℓ_t(y_t) we have: ∑_t∈_kℓ_t(y_t)-ℓ_t(y) = ∑_t∈_kℓ_t(y_t_k)-ℓ_t(y) ≤,y_t_k-y. We rewrite this bound as follows: ,y_t_k-y = 1/2η_t̅_k(y_t_k-y_2^2+η_t̅_k^2_2^2 - (y_t_k-y)-η_t̅_k_2^2). By using the property of non-expansiveness of the Euclidean projection we have: y_t_k+1 - y_2^2 = _𝒴( y_t_k - η_t̅_k) - _𝒴(y)_2^2 ≤(y_t_k-η_t̅_k) -y _2^2. Combining the last two results leads to: ,y_t_k-y≤1/2η_t̅_k(y_t_k-y_2^2+η_t̅_k^2_2^2 - y_t_k+1 - y_2^2 ). Finally, we combine the last inequality with inequality (<ref>) and sum these over k=1,…,K. ∑_t=1^t̅_Kℓ_t(y_t)-ℓ_t(y) ≤∑_k=1^K, y_t_k-y ≤∑_k=1^K 1/2η_t̅_k( y_t_k-y_2^2-y_t_k+1-y_2^2) +∑_k=1^K η_t̅_k/2_2^2 =1/2(y_t_1-y_2^2/η_t̅_1 -y_t_K+1-y_2^2/η_t̅_K + ∑_k=1^K-1(1/η_t̅_k+1-1/η_t̅_k)y_t_k+1-y_2^2) +∑_k=1^K η_t̅_k/2_2^2 ≤1/2(D^2/η_t̅_1 + ∑_k=1^K-1(1/η_t̅_k+1-1/η_t̅_k)D^2) +∑_k=1^K η_t̅_k/2_2^2 = D^2/2η_t̅_K +∑_k=1^K η_t̅_k/2_2^2. Claim <ref> is thereby proved by taking the supremum over y∈. Proposition <ref> gives us guidelines to design optimal update cycles and learning rates. First, this proposition states that it suffices to verify the feasibility constraint at the update periods to ensure that the whole sequence of order-up-to levels produced by COSD is feasible. On the other hand, to guarantee that this algorithm has a sublinear regret we will need to ensure that the cycles 𝒯_k are not too long, i.e. that update periods are frequent enough, and then consider adequate learning rates. Since OSD is an instance of COSD, we can recover classical regret bounds of OSD (see e.g. <cit.> or <cit.>) and in particular O(√(T)) regret bounds, as a corollary of Proposition <ref>. Let assumptions <ref>.<ref> and <ref>.<ref> be satisfied. Given positive non-increasing learning rates (η_t)_t∈, i.e. 0<η_t+1≤η_t for all t∈, the regret of OSD (see Algorithm <ref>) satisfies for all T∈, R_T ≤D^2/2η_T+∑_t=1^Tη_t/2g_t_2^2. In particular, if η_t = γ D/(G √(t)) with γ >0, we have R_T ≤(1/2γ+γ) G D √(T). The regret bound follows from claim <ref> of Proposition <ref>. Indeed, by taking t_k=k for all k∈, COSD coincide with OSD, thus, for K=t̅_K=T we obtain the desired regret bound. §.§ Controlling the cycles' length In this subsection we analyze COSD with eventually unbounded cycles as it is the case for the CUP or the MaxCOSD update strategy. 
We start by introducing an assumption on the cycles called geometric cycles which yields expected regret bounds and high probability regret bounds. [Geometric cycles] Let μ∈(0,1]. An update strategy (t_k)_k∈ has μ-geometric cycles if there exists C_μ≥ 1 such that for any m∈ and k∈ we have: t_k+1-t_k>m≤ C_μ (1-μ)^m. The name geometric cycles is motivated by the fact that if ξ is a geometric random variable with parameter μ, i.e. ξ∈ and ξ>m=(1-μ)^m for all m∈, then Assumption <ref> rewrites: t_k+1-t_k>m≤ C_μξ>m for all k, m∈. This property resembles sub-exponential concentration <cit.>. The following proposition summarizes some important properties of geometric cycles. Consider an update strategy (t_k)_k∈ with μ-geometric cycles (see Assumption <ref>), then, * For all k∈, t_k+1-t_k≤ C_μ /μ and (t_k+1-t_k)^2≤ C_μ (2-μ)/μ^2≤ 2C_μ /μ^2. * For all K∈ and any confidence level δ∈(0,1) we have with probability at least 1-δ, √(∑_k=1^K (t_k+1-t_k)^2)≤(1+log(KC_μ /δ)/μ)√(K). First, let us observe that if μ=1 in Assumption <ref> then, t_k=k for all k∈ and one can easily check that all the claims are indeed verified. In the following we assume μ∈(0,1). Let ξ be a geometric random variable of parameter μ, that is, ξ∈ and ξ>m=(1-μ)^m for all m∈. Then, it is well-known that ξ=1/μ and ξ^2=(2-μ)/μ^2. We now prove the first claim by means of direct computations. For all k∈, we have: t_k+1-t_k=∑_m=0^+∞t_k+1-t_k>m≤∑_m=0^+∞C_μξ>m = C_μξ = C_μ/μ. Also, using the fact that for any integer a and real number b, we have, a>b if and only if a>⌊ b ⌋, we can upper bound (t_k+1-t_k)^2 as follows: (t_k+1-t_k)^2 =∑_m=0^+∞(t_k+1-t_k)>√(m) = ∑_m=0^+∞(t_k+1-t_k)>⌊√(m)⌋ ≤∑_m=0^+∞C_μξ>⌊√(m)⌋ = C_μξ^2 = C_μ2-μ/μ^2≤2C_μ/μ^2. Finally, let us prove the second claim. Let K∈ and ε>0, it is classical to see that: √(∑_k=1^K (t_k+1-t_k)^2)>ε ≤∃ k ∈ [K], (t_k+1-t_k)^2>ε^2/K ≤∑_k=1^K (t_k+1-t_k)>⌊ε/√(K)⌋. We use Assumption <ref>, which yields: √(∑_k=1^K (t_k+1-t_k)^2)>ε≤ KC_μ(1-μ)^⌊ε/√(K)⌋≤ KC_μ(1-μ)^(ε/√(K))-1. Now for δ∈(0,1) we plug ε = (1+log(δ/(KC_μ))/log(1-μ))√(K) > 0 in the last result, to obtain: √(∑_k=1^K (t_k+1-t_k)^2)≤(1+log(δ/(KC_μ))/log(1-μ))√(K), with probability at least 1-δ. We conclude by simply observing that -1/log(1-μ)≤ 1/μ. Using these tools, we are now ready to provide strong regret bounds for COSD under the assumption of geometric cycles. We are going from now on to focus on adaptive learning rates (see Eq. <ref>) which will allow us to unlock high probability regret bounds. Consider an inventory problem satisfying Assumption <ref> and COSD (see Algorithm <ref>) with adaptive learning rates (see Eq. (<ref>)) and an update strategy with μ-geometric cycles (see Assumption <ref>), then, the following regret bounds hold for all T∈, R_T≤√(2C_μ)/μ D G (1/2γ+γ + 1)√(T), and for any confidence level δ∈(0,1) we have with probability at least 1-δ, R_T ≤ DG (1/2γ+γ+1)(1+1/μlog(TC_μ/δ))√(T). Finally, COSD is feasible if and only if for all k∈{2,3,…}, y_t_k≽ x_t_k. Let T∈ and K = min{k≥ 1, t_k≤ T}. We start by bounding the regret R_T in terms of the regret at the end of K-1^th cycle R_t̅_K-1=R_t_K-1 and a remainder term as follows: R_T ≤ R_t̅_K-1 + sup_y∈∑_t=t_K^T ℓ_t(y_t)-ℓ_t(y) ≤ R_t̅_K-1 + sup_y∈∑_t=t_K^T g_t,y_t-y. where we used g_t∈∂ℓ_t(y_t). Applying Cauchy-Schwartz inequality, Assumption <ref> and T ≤ t_K+1-1 leads us to: R_T ≤ R_t̅_K-1 + DG(t_K+1-t_K). Let us now bound R_t̅_K-1. 
As a consequence of claim <ref> of Proposition <ref> and after substituting η_t by its value and applying Lemma <ref> provided in Appendix <ref> to f(x)=1/(2√(x)), a_0=0 and a_k=∑_t∈𝒯_k g_t^2_2 for k∈[K-1] we obtain: R_t̅_K-1≤ D (1/2γ + γ)√(∑_k=1^K-1∑_t∈𝒯_k g_t^2_2 )≤ D G (1/2γ + γ)√(∑_k=1^K-1 (t_k+1-t_k)^2 ). Combining this with inequality (<ref>) leads us to: R_T ≤ D G ((1/2γ + γ)√(∑_k=1^K-1 (t_k+1-t_k)^2 ) + (t_K+1-t_K)) ≤ D G (1/2γ + γ+1)√(∑_k=1^K (t_k+1-t_k)^2 ). To obtain the expected regret bound, we start by noticing that K≤ T then taking the expectation in this last inequality, applying Jensen's inequality, then, Proposition <ref> that bounds (t_k+1-t_k)^2≤ 2C_μ/μ^2 we end up with the desired bound. Finally, to obtain the high probability regret bound, we notice again that K≤ T and then apply the high probability bound of Proposition <ref>. §.§ Proof of Theorem <ref> In the following lemma we claim that under the assumptions of Theorem <ref>, MaxCOSD with the appropriate learning rates has geometric cycles (see Assumption <ref>). Consider an inventory problem and let assumptions <ref>, <ref> and <ref> hold. Then, MaxCOSD with learning rates defined in Eq. (<ref>) and γ∈ (0,ρ/D] has μ-geometric cycles with C_μ=1. First, notice that if min_i∈[n]d_t,i≥ρ then x_t+1≼ŷ_t+1, i.e. t+1 is an update period for MaxCOSD. This is a consequence of Lemma <ref>, which applies since we have: ŷ_t+1-y_t = _𝒴( ŷ_t_k - η_t ∑_s=t_k^t g_s )-ŷ_t_k≤η_t ∑_s=t_k^t g_s_2 ≤γ D ≤ρ≤min_i∈[n]d_t,i. Now let k∈ and m∈. Using this initial observation we have: t_k+1-t_k>m = x_t_k+1ŷ_t_k+1, …, x_t_k+mŷ_t_k+m ≤min_i∈[n] d_t_k,i < ρ, …, min_i∈[n] d_t_k+m-1,i < ρ = ∑_s≥ 1t_k=s, min_i∈[n] d_s,i<ρ, …, min_i∈[n] d_s+m-1,i<ρ The last step of this proof is showing that in fact, t_k=s, min_i∈[n] d_s,i<ρ, …, min_i∈[n] d_s+m-1,i<ρ≤t_k=s(1-μ)^m. We prove this by a simple induction over m≥1. Noticing that {t_k=s}∈σ(g_1,x_2,…,g_s-1,x_s)⊂σ(ℓ_1,d_1,…,ℓ_s-1,d_s-1), where the last inclusion comes from Assumption <ref>, and using the basic properties of conditional expectations we derive the inequality for m=1: t_k=s, min_i∈[n] d_s,i < ρ = t_k=s, min_i∈[n] d_s,i < ρ | ℓ_1,d_1,…,ℓ_s-1, d_s-1 =t_k=smin_i∈[n] d_s,i < ρ | ℓ_1,d_1,…,ℓ_s-1, d_s-1 ≤t_k=s(1-μ), where the last inequality comes from Assumption <ref>. Assume now the relation (<ref>) holds for m, let us prove in a similar way that it holds for m+1. t_k=s, min_i∈[n] d_s,i<ρ, …, min_i∈[n] d_s+m,i<ρ = t_k=s, min_i∈[n] d_s,i<ρ, …, min_i∈[n] d_s+m,i<ρ | ℓ_1,d_1,…,ℓ_s+m-1,d_s+m-1 = t_k=smin_i∈[n] d_s,i<ρ⋯min_i∈[n] d_s+m-1,i<ρmin_i∈[n] d_s+m,i<ρ | ℓ_1,d_1,…,ℓ_s+m-1,d_s+m-1 ≤ t_k=s, min_i∈[n] d_s,i<ρ, …, min_i∈[n] d_s+m-1,i<ρ (1-μ) ≤ t_k=s(1-μ)^m+1. Summing the relations (<ref>) over s≥1 leads to the final bound: t_k+1-t_k>m≤ (1-μ)^m, which is our claim. We are now ready to prove Theorem <ref>. By definition, MaxCOSD is always feasible (independently of the learning rates chosen). Lemma <ref> ensures that when γ∈(0,ρ/D], MaxCOSD with adaptive learning rates as defined in Eq. (<ref>) has μ-geometric cycles with C_μ=1. Thus, Corollary <ref> applies and leads to the regret bounds we claimed. § OTHER LEMMAS Let a_0,a_1,…,a_K be non-negative numbers and f:_+→_+ a measurable non-increasing function, we have: ∑_k=1^K a_k f(a_0+∑_m=1^k a_m) ≤∫_a_0^∑_k=0^K a_k f(x)dx. Define s_k=∑_m=0^k a_m. The following holds for all k∈[K], a_kf(a_0+∑_m=1^k a_m)=a_kf(s_k)=∫_s_k-1^s_kf(s_k)dx≤∫_s_k-1^s_kf(x)dx. Summing over k=1,…,K leads to the desired bound. 
Consider an inventory problem that satisfies Assumption <ref>, then, the regret of any algorithm is bounded as follows: R_T ≤ DGT. Let y∈. For all t∈, we have: ℓ_t(y_t)-ℓ_t(y)≤g_t,y_t-y≤g_t_2y_t-y_2≤ GD, where we used the definition of the subgradient g_t∈∂ℓ_t(y_t), then, Cauchy-Schwartz inequality and finally Assumption <ref>. Summing these inequalities over t=1,…,T and taking the supremum over y∈ leads to the desired bound. Consider the newsvendor cost function c defined in (<ref>). Let d∈^n. A vector g∈^n is a subgradient of the function of c(·,d) at y∈^n if and only if for all i∈[n] we have: g_i ∈{h_i}, if y_i>d_i [-p_i,h_i], if y_i=d_i {-p_i}, if y_i<d_i. In particular, denoting s=min{y,d}, the vector (h_iy_i>s_i - p_i y_i=s_i)_i∈[n] is a subgradient of c(·,d) at y. § POSTPONED PROOFS §.§ Proof of Proposition <ref> Formally, in the lost sales single-product newsvendor setting with observable demand over =[0,D], a feasible deterministic algorithm is defined by a sequence of functions (Y_t)_t∈ of the form Y_t : _+^t-1→ [0,D] satisfying Y_t+1(d_1,…,d_t) ≥Y_t(d_1,…,d_t-1)-d_t for all d_1,…,d_t∈_+. Let d̃∈(0,D], and consider a constant demand sequence defined by d̃_t := d̃ at every period t ∈. Consider now (ỹ_t)_t ∈ the sequence of order-up-to levels generated by this algorithm when facing this constant demand, that is, ỹ_t := Y_t(d̃,…,d̃). We will now distinguish two cases. * Consider the case where ỹ_t=0 for all t ∈. Taking y=d̃ in the regret definition (<ref>) yields: R_T ≥∑_t=1^T c(0,d̃) - ∑_t=1^T c(d̃,d̃)=Tpd̃=Ω(T). * Now, consider the case where there exists T_0≥ 1 such that ỹ_1=…=ỹ_T_0-1=0 and ỹ_T_0>0. Consider a new demand sequence (d_t)_t ∈ defined as follows: d_1=…=d_T_0-1=d̃ and d_t=0 for t≥ T_0. Denote by (y_t)_t ∈ the sequence of order-up-to levels generated by the algorithm against the demand sequence (d_t)_t ∈, that is, y_t=Y_t(d_1,…,d_t-1). Since the algorithm is deterministic, we also have: y_t=0 for t≤ T_0-1. Indeed, we have, y_t=Y_t(d_1,…,d_t-1) = Y_t(d̃,…,d̃) = ỹ_t=0. On the other hand, we have for all t≥ T_0+1, y_t≥y_t-1-d_t-1=y_t-1, thus, y_t≥ y_T_0=ỹ_T_0>0 for all t≥ T_0. Taking y=0 in the regret definition (<ref>) we obtain for T≥ T_0, R_T ≥∑_t=1^T c(y_t,d_t)-c(0,d_t) = ∑_t=T_0^T c(y_t,0) = ∑_t=T_0^T h y_t≥ (T-T_0+1) h ỹ_T_0=Ω(T). §.§ Proof of Proposition <ref> Take (d_t)_t∈⊂_++ such that ∑_t=1^∞ d_t <y_1 and define C=y_1-∑_t=1^∞ d_t >0. Notice that due to the feasibility constraint (<ref>) and the lost sales dynamic we have for all t∈, y_t+1≥ x_t+1=y_t-d_t≥ y_t-d_t. Thus, we have for every t∈, y_t = y_1 + ∑_s=1^t-1 (y_s+1-y_s) ≥ y_1 - ∑_s=1^t-1 d_s ≥ C. Now take the losses ℓ_t(y)=y, then, R_T=∑_t=1^T y_t ≥ C T for all T∈. §.§ Proof of Lemma <ref> Since y-d=max{y-d,0} and y'≽ 0, it is enough to show that y'≽ y-d. The latter holds since, for any i∈[n], we have y_i - y'_i≤y'-y_2 ≤min_i∈[n]d_i≤ d_i. §.§ Proof of Theorem <ref> The regret bound follows from the classical analysis of OSD, see our Corollary <ref> or the proof of <cit.>. Thus, we only need to show the feasibility. For all t∈, we have, y_t+1-y_t_2=_(y_t-η_t g_t)-y_t_2 ≤η_t g_t_2=γ D/G√(t)g_t_2≤ρ/√(t)≤ρ, the first inequality is provided by the property of non-expansiveness of the Euclidean projection, the second inequality comes from γ≤ρ/D and g_t_2≤ G, and the last inequality from √(t)≥1. Combining this result with Assumption <ref>, we obtain y_t+1-y_t_2≤min_i∈[n] d_t,i. According to Lemma <ref> and the inventory dynamical constraint (<ref>) this guarantees feasibility. 
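For reference, the newsvendor cost c(y,d) and the closed-form subgradient selection under censored demand stated in the lemma above translate directly into code; the sketch below is only illustrative and the function names are ours.

import numpy as np

def newsvendor_cost(y, d, h, p):
    # c(y, d) = sum_i ( h_i * max(y_i - d_i, 0) + p_i * max(d_i - y_i, 0) )
    y, d = np.asarray(y, float), np.asarray(d, float)
    return float(np.sum(h * np.maximum(y - d, 0.0) + p * np.maximum(d - y, 0.0)))

def censored_subgradient(y, s, h, p):
    # Subgradient of c(., d) at y computed from the sales s = min(y, d) only:
    # the i-th entry equals h_i if y_i > s_i and -p_i if y_i = s_i.
    y, s = np.asarray(y, float), np.asarray(s, float)
    return h * (y > s) - p * (y == s)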
§ DISCUSSION §.§ On the relation between Online Inventory Optimization and Online Convex Optimization In the usual Online Convex Optimization (OCO) framework introduced by <cit.> a decision-maker and an environment interact as follows: at every time period t∈, first, the decision-maker chooses a decision y_t∈ and the environment chooses a loss function ℓ_t, then, the decision-maker receive some feedback which usually consist of either the loss function itself ℓ_t (full-information setting), the loss incurred ℓ_t(y_t) (bandit setting) or a subgradient g_t∈∂ℓ_t(y_t) (first-order feedback setting). The goal of the decision-maker is to minimize its cumulative loss incurred ∑_t=1^T ℓ_t(y_t) . OIO extend the OCO framework by adding the feasibility constraint (<ref>). A naive solution to accommodate such constraints into the OCO framework is by adding to the losses the convex indicator of the feasibility constraint, that is, by considering the losses ℓ̃_t(y)=ℓ_t(y)+χ_t(y), where χ_t(y) takes the value 0 if y≽ x_t holds and +∞ otherwise. However, this is not satisfactory since by doing so we also alter the regret (<ref>) by imposing on the competitor y∈ the feasibility constraints associated to the algorithm. Many extensions of the OCO framework have been developed over the years. Of particular interest, those which include constraints of different forms like: * OCO with long-term constraints <cit.>, where m convex constraints of the form f_i(·)≤ 0 for i∈[m] should be satisfied in the long run, that is, the goal is to minimize the cumulative loss while keeping low constraint violation ∑_t=1^T f_i(y_t) for each i∈[m]. * OCO with long-term and time-varying constraints <cit.> which compared to the previous extension, considers time-varying convex constraints of the form f_t,i(·)≤ 0 where f_t,i is revealed at the end of time period t. Even as long-term constraints, this learning task is known to be unsolvable in general (see e.g. <cit.> or <cit.> for more precise statement), thus, restricted notions of regret have been considered in this context. * OCO with ramp constraints <cit.>, where at each time period t∈{2,3,…} the decision-maker should choose y_t∈ such that |y_t,i-y_t-1,i|≤κ_i for all i∈[n]. We argue that OIO problems are different from these extensions. Indeed, our feasibility constraints (<ref>) are neither long-term constraint since we do not allow for violations, nor ramp constraints since the bounds are time-varying. Also, our task is further challenging since we aim at bounding the regret (<ref>) based on a competitor y∈ which does not suffer from the feasibility constraint (<ref>). §.§ On the notion of pseudo-regret There exists an alternative notion of regret we call the pseudo-regret R̅_T which is in fact more common in the literature of online inventory problems <cit.>. It is defined as the difference between the expected cumulative loss of the algorithm and that of the best fixed constant strategy, that is, formally R̅_T = ∑_t=1^Tℓ_t(y_t)-inf_y∈∑_t=1^Tℓ_t(y). The difference between the expected regret R_T and the pseudo-regret R̅_T is in the competitor y. In the former, the competitor is random and depends on the realization of the losses, whereas, in the latter the competitor is fixed and depends only on the distribution of the losses. Notice that we always have R̅_T ≤R_T, thus, an upper bound obtained on the expected regret applies directly to the pseudo-regret.
http://arxiv.org/abs/2307.04415v1
20230710084328
Episodic Gaussian Process-Based Learning Control with Vanishing Tracking Errors
[ "Armin Lederer", "Jonas Umlauft", "Sandra Hirche" ]
eess.SY
[ "eess.SY", "cs.LG", "cs.SY", "stat.ML" ]
Journal of Class Files, Vol. 14, No. 8, August 2021 Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals Episodic Gaussian Process-Based Learning Control with Vanishing Tracking Errors Armin Lederer, Graduate Student Member, IEEE, Jonas Umlauft, Sandra Hirche, Fellow, IEEE, Armin Lederer, Jonas Umlauft and Sandra Hirche are with the Chair of Information-oriented Control (ITR), School of Computation, Information and Technology, Technical University of Munich, 80333 Munich, Germany (email: armin.lederer, jonas.umlauft, [email protected]). Received / Accepted ================================================================================================================================================================================================================================================================================================================================================================================ Due to the increasing complexity of technical systems, accurate first principle models can often not be obtained. Supervised machine learning can mitigate this issue by inferring models from measurement data. Gaussian process regression is particularly well suited for this purpose due to its high data-efficiency and its explicit uncertainty representation, which allows the derivation of prediction error bounds. These error bounds have been exploited to show tracking accuracy guarantees for a variety of control approaches, but their direct dependency on the training data is generally unclear. We address this issue by deriving a Bayesian prediction error bound for GP regression, which we show to decay with the growth of a novel, kernel-based measure of data density. Based on the prediction error bound, we prove time-varying tracking accuracy guarantees for learned GP models used as feedback compensation of unknown nonlinearities, and show to achieve vanishing tracking error with increasing data density. This enables us to develop an episodic approach for learning Gaussian process models, such that an arbitrary tracking accuracy can be guaranteed. The effectiveness of the derived theory is demonstrated in several simulations.=-1 Gaussian processes, machine learning, uncertain systems, data-driven control. § INTRODUCTION For many technical systems, no or only partial first principle models are available due to their complexity or a priori unknown operating conditions. Since measurement data of such systems can typically be obtained, inferring models using supervised machine learning techniques has become increasingly popular in recent years <cit.>. In particular, Gaussian process (GP) regression <cit.> is a popular method since it is very data-efficient <cit.> and exhibits closed-form expressions for model updates allowing on-line learning <cit.>. Moreover, GP models provide an explicit measure for prediction uncertainty, which enables the confidence-based distributed aggregation of GP models <cit.>, and allows to tune the behavior of control towards curiosity <cit.> or cautiousness <cit.>. In addition to these beneficial properties, GP regression is particularly appreciated in safety-critical control due to the existence of prediction error bounds <cit.>. These bounds are typically based on the close relationship between kernel methods and GPs <cit.>, such that the reproducing kernel Hilbert space norm induced by the GP can be used as a measure of function complexity. 
By combining bounds on this norm and assumptions about observation noise distributions, statistical prediction error bounds can be derived <cit.>. They can be efficiently computed on-line in an optimization-based fashion <cit.>, but data-dependent closed-form expressions also exist <cit.>. Moreover, they reduce to deterministic bounds when the observation noise is bounded <cit.>. Based on the prediction error bounds for learned GP models, tracking accuracy guarantees for a large variety of control laws have been derived. This can be achieved using Lyapunov theory, e.g., for feedback linearization <cit.>, computed torque control <cit.> and sliding mode control <cit.>, by extending stability properties of nominal model predictive control, e.g., using continuity arguments <cit.>, or robust linear control, e.g., through integral quadratic constraints <cit.>. However, these approaches suffer from the crucial drawback that accuracy guarantees are global, even though the prediction error bounds from GP models are state-dependent. Therefore accuracy guarantees can be very loose in cases with inhomogeneously distributed training data over the state space. In such a case, the guarantees would be dominated globally by the most conservative bound derived from the region with the fewest training data. In general, the data dependency of such accuracy guarantees for model-based control methods has barely been analyzed in detail. While it can be shown for feedback linearization with event-triggered on-line learning that the tracking error vanishes with growing noise-free data set <cit.>, similar results for noisy data do not exist. Moreover, this result is limited to feedback linearizing controllers to the best of our knowledge and does not extend to other approaches. Finally, on-line learning with GPs can be realized using suitable approximations in principle <cit.>, but it remains computationally expensive, such that it is not applicable to systems with limited computational resources. The computationally less demanding approach of episodic, off-line learning has been investigated in the context of optimization-based controller tuning approaches <cit.>, which can be shown to provide data-dependent performance guarantees due to the close relationship to Bayesian optimization <cit.>. While these guarantees can be extended to model-based reinforcement learning <cit.>, they strongly rely on the solved optimization problems, such that they do not generalize to a wider class of control techniques. Therefore, no guarantees and conditions for the convergence of accuracy guarantees for model-based control laws employing GP models exist to the best of our knowledge. Consequently, it is an open question how we can learn a GP model in order to ensure a desired tracking error bound with such learning-based controllers. §.§ Contribution and Structure The main contribution of this article is a novel episodic learning approach for GP models in order to ensure arbitrary tracking accuracy when the GP is used to compensate unknown nonlinearities in control. Such nonlinearities can be found in a wide range of applications ranging from underwater vehicles, where unmodeled hydrodynamic forces due to currents can appear <cit.>, to physical human-robot interaction, where humans introduce generally unknown torques <cit.>. For the development of this approach, we first derive an easily interpretable prediction error bound for GPs by exploiting their Bayesian foundations. 
In order to allow its straightforward computation, we provide probabilistic Lipschitz bounds for unknown functions based on the GP prior. Based on these results, we propose a kernel-based measure to evaluate the training data density, whose flexibility we demonstrate by exemplarily illustrating it for squared exponential (SE), Matérn class and linear kernels. Moreover, we show that prediction error bounds directly depend on this data density measure, which allows us to prove vanishing prediction errors with growing data density. Based on this analysis of the GP prediction error, we derive a novel, data density-dependent tracking error bound for control laws in linear systems which employ the GP model for compensation of an unknown nonlinearity. Finally, we extend these accuracy guarantees to establish a direct relationship with the proposed data density measure, which allows us to develop an episodic approach for learning a GP model ensuring a specified tracking error bound. This article is based on our prior work <cit.>, which purely focuses on the derivation of probabilistic prediction error bounds depending on the posterior variance of Gaussian processes. It significantly extends these preliminary results by establishing a direct relationship between the training data density and prediction error bounds. Due to this relationship, we can bound the tracking error of linear systems with an unknown nonlinearity compensated by a learned model directly in terms of the data density. This allows us to actively generate training data for achieving arbitrary tracking accuracy in an episodic approach, while <cit.> only bounds the tracking error of feedback linearizing controllers with models learned from a given data set. Therefore, we extend the analysis framework from our prior work <cit.> to a design method. The remainder of this article is structured as follows: We briefly introduce Gaussian process regression and formalize the considered problem setting in <ref>. In <ref>, we derive a novel Bayesian prediction error bound for GP regression and provide methods to determine all relevant parameters based on the prior distribution. We develop a kernel-dependent measure of data density and establish a straightforward relationship to the GP variance, which allows us to investigate the asymptotic behavior of the error bound with increasing data set size in <ref>. In <ref>, we exploit these results to derive time-varying and time-independent tracking error guarantees, which we exploit to develop a novel episodic learning algorithm for ensuring arbitrary tracking accuracy. Finally, in <ref>, we evaluate the developed theoretical framework in different simulations to demonstrate its effectiveness, before we conclude the paper in <ref>. §.§ Notation Vectors/matrices are denoted by lower/upper case bold symbols, the n× n identity matrix by I_n, the Euclidean norm by ·, and λ_min(A) and λ_max(A) the minimum and maximum real parts of the eigenvalues of a matrix A, respectively. Sets are denoted by upper case black board bold letters, and sets restricted to positive/non-negative numbers have an indexed +/+,0, e.g., ℝ_+ for all positive real valued numbers. The cardinality of sets is denoted by |·| and subsets/strict subsets are indicated by . Class 𝒪 notation is used to provide asymptotic upper bounds on functions. The ceil and floor operator are denoted by ⌈·⌉ and ⌊·⌋, respectively. The Gaussian distribution with mean μ∈ℝ and variance σ^2∈ℝ_+ is denoted by 𝒩(μ,σ^2). 
A chi-squared distribution with N degrees of freedom is denoted by χ^2_N. The expectation operator E[·] can have an additional index to specify the considered random variable. Finally, a function α:ℝ_0,+→ℝ_0,+ is in class 𝒦_∞ if it is monotonically increasing and α(0)=0, lim_x→∞α(x)=∞. =-1 § PRELIMINARIES AND PROBLEM SETTING In this paper, we consider the problem of controlling linear systems perturbed by an unknown nonlinearity such that they track reference trajectories with a prescribed accuracy. In order to achieve this, we employ models learned via Gaussian process regression as compensation. Therefore, we first introduce the fundamentals of Gaussian process regression in <ref>, before we formalize the problem setting in <ref>. §.§ Gaussian Process Regression A Gaussian process is a stochastic process such that any finite number of outputs, N∈ℕ, is assigned a joint Gaussian distribution with prior mean function m:ℝ^d→ℝ and covariance defined through the kernel k:ℝ^d×ℝ^d→ℝ <cit.>. Without loss of generality, we assume m(·) to equal 0 in the following. In order to perform regression with Gaussian processes, they are considered as a a prior distribution. This allows to employ Bayes' theorem to calculate the posterior distribution given a training data set 𝔻={(x^(n),y^(n)}_n=1^N consisting of N inputs x^(n)∈ℝ^d and targets y^(n)∈ℝ, which are Gaussian perturbed measurements of an unknown function f:ℝ^d→ℝ, i.e., y^(n)=f(x^(n))+ϵ^(n), ϵ^(n)∼𝒩(0,σ_on^2), σ_on^2∈ℝ_+. Due to the properties of Gaussian distributions, the posterior is again a Gaussian process, which yields the posterior mean μ(·) and variance σ^2(·) functions μ(x) =k^T(x)( K+σ_on^2I_N)^-1y, σ^2(x) =k(x,x)-k^T(x)(K+σ_on^2I_N)^-1k(x), where we define the kernel matrix K and the kernel vector k(x) through K_ij=k(x^(i),x^(j)) and k_i(x)=k(x,x^(i)), respectively, with i,j=1,…,N, and y = [y^(1)⋯ y^(N)]^T. §.§ Problem Formulation We consider single-input linear dynamical systems with nonlinear input perturbation of the form ẋ=Ax+b(u+f(x)) with initial condition x(0)=x_0∈𝕏⊆ℝ^d and scalar control input u:ℝ_0,+→𝕌⊆ℝ. The matrix A∈ℝ^d× d and vector b∈ℝ^d are assumed to be known, while we consider f:𝕏→ℝ to be an unknown nonlinearity. This system structure covers a wide range of practical systems and can represent, e.g., systems controlled via approximate feedback linearization <cit.> or backstepping controllers for certain classes of dynamics <cit.>. Note that we merely consider the restriction to single-input systems for notational convenience, but our derived results can be easily generalized to multi-input dynamics. The considered task is to track a bounded reference trajectory x_ref:ℝ_0,+→ℝ^d with the state x(t). In order to enable the accurate tracking of the reference trajectory x_ref(·), we restrict ourselves to references of the form ẋ_ref=Ax_ref+br_ref, where r_ref:ℝ_0,+→ℝ is a reference signal. For tracking the reference trajectory, we can employ a control law u = θ^T(x-x_ref)+r_ref-f̂(x), where θ∈ℝ^d is a control gain vector and f̂:𝕏→ℝ is a model of the unknown nonlinear perturbation f(·). This control law leads to closed-loop dynamics of the tracking error e(t)=x(t)-x_ref(t) given by ė=A_θe + b(f(x)-f̂(x)), where A_θ=A-bθ^T. In order to ensure the stability of these dynamics in the case of exact model knowledge f(x)=f̂(x), we employ the following assumption on A_θ. 
The matrix A_θ has distinct and non-positive eigenvalues, which decrease monotonically with the parameters θ, i.e., there exists a class 𝒦_∞ function α:ℝ_0.+→ℝ_0,+ such that λ_max(A_θ)≤-α(θ). This assumption essentially requires the controllability of the pair (A,b) <cit.>, which allows the eigenvalues of the matrix A_θ to be considered as design parameters, e.g., using methods such as pole placement. Since controllability is a common requirement in linear systems theory, <ref> is not restrictive. Note that the requirement of distinct eigenvalues is only required to simplify the presentation in the following sections by ensuring diagonalizability of A_θ, but can be avoided by generalizing the derivations using Jordan blocks <cit.>. While <ref> ensures that the error dynamics (<ref>) do not diverge, the tracking precision crucially relies on the accuracy of the model f̂(·). Therefore, we assume to learn it from measurements (x^(n),y^(n)) using Gaussian process regression, such that we can use f̂(x)=μ(x) in the control law (<ref>). Since this merely leads to an approximate compensation of the nonlinearity, exact tracking cannot be ensured in general. Therefore, we consider the problem of learning a Gaussian process model of f(·), such that the tracking error is guaranteed to be probabilistically bounded by a prescribed constant e̅∈ℝ_+, i.e., ℙ(x(t)-x_ref(t)≤e̅,  ∀ t≥ 0)≥ 1-δ for δ∈(0,1). Due to the complexity of this problem, we decompose it into the subproblems of deriving a probabilistic error bound for Gaussian process regression, analyzing the dependency of the error bounds on the training data density, and developing an approach for generating training data with sufficiently high density, such that the prescribed tracking error bound e̅ is satisfied. These subproblems are described in more detail in the following. §.§.§ Probabilistic Regression Error Bounds In order to be able to ensure any bound for the tracking error x-x_ref, it is necessary to find an upper bound for the learning error f(x(t))-μ(x(t)) along the system trajectory x(t). Since we do not know the exact system trajectory x(t) in advance, we consider the problem of bounding the regression error in a compact domain 𝕏⊂ℝ^d. Since the bound must hold jointly for all states x in the domain 𝕏, we refer to it as probabilistic uniform error bound, which is formally defined as follows. Gaussian process regression exhibits a uniformly bounded prediction error on a compact set 𝕏⊂ℝ^d with probability 1-δ if there exists a function η:𝕏→ℝ_0,+ such that P( |f(x)-μ(x)|≤η(x), ∀x∈𝕏)≥ 1-δ. In general, we cannot expect to guarantee a uniformly bounded regression error without any regularity assumptions about the unknown function f(·). Due to the Bayesian foundation of Gaussian processes, we employ their prior distribution for this purpose, which we formalize in the following assumption. The unknown function f(·) is a sample from the Gaussian process 𝒢𝒫(0,k(x,x')). This assumption, which has similarly been used in, e.g., <cit.>, has a twofold implication. On the one hand, it specifies the admissible functions for regression via the space of sample functions, which depends on the employed kernel k(·,·). For example, it is straightforward to see that polynomial kernels can be used to learn polynomial functions of the same degree. Moreover, it is well known that the sample space of GPs with squared exponential kernel contains all continuous functions <cit.>. 
Therefore, choosing a suitable kernel for ensuring that the unknown function lies in the space of sample functions is usually not a challenging problem in practice. On the other hand, <ref> induces a weighting between possible sample functions due to the Gaussian process probability density. Since we base the derivation of the uniform error bound on this weighting, an unknown function f(·) with low prior probability density would lead to sets {f'(·): |f'(x)-μ(x)|≤η(x) } with a high probability under the GP prior, even though they do not contain the unknown function f(·). Hence, the true function f(·) should have a high probability density under the GP prior. This can be efficiently achieved in practice using suitable kernel tuning methods, e.g., <cit.>, or via a re-calibration of the probability distribution after training <cit.>. Therefore, ensuring a suitable prior distribution is not a severe limitation, such that <ref> is not restrictive in practice. §.§.§ Dependency of Error Bounds on Data Density After a probabilistic uniform error bound η(·) has been derived, we consider the problem of deriving conditions for the training data 𝔻 which ensure that the error bound η(·) stays below a desired value η̅∈ℝ_+. This requires the design of a suitable measure of data density ρ:𝕏→ℝ_+, which reflects the dependency of the error bound η(·) on the data distribution. Therefore, the measure ρ(·) must consider the information structure of the GP induced by the employed kernel k(·,·). Based on the derived density measure ρ(·), the problem of ensuring a learning error bound η̅ reduces to showing that the existence of a lower bound ρ∈ℝ_+ for the data density ρ(·) leads to the implication ρ(x)≥ρ ⇒ η(x)≤η̅(ρ). As we want to be able to ensure arbitrary small learning error bounds η̅(ρ), it must additionally hold that lim_ρ→∞η̅(ρ)=0. §.§.§ Data Generation for Guaranteed Tracking Accuracy Finally, we consider the problem of developing an episodic approach for training data generation, which achieves the necessary data density ρ(·) to ensure the satisfaction of the tracking error bound (<ref>). Firstly, this requires the derivation of a tracking error bound, such that for a given learning error bound η̅, we have η(x_ref(t))≤η̅ ⇒ ℙ(x(t)-x_ref(t)≤υ̅(η̅))≥ 1-δ for some function υ̅:ℝ_0,+→ℝ_0,+. Similarly as in (<ref>), this bound must also vanish asymptotically, i.e., lim_η̅→ 0υ̅(η̅) = 0, in order to admit arbitrarily small tracking error guarantees. Using this tracking error bound and the derived dependency of the learning error bound η(·) on the data density ρ(·), the problem of developing a data generation approach simplifies to finding an episodic roll-out strategy satisfying ρ_i+1>ρ_i, lim_i→∞ρ_i = ∞, where the index i is used to denote the roll-out episode. This ensures that there exists a finite number of episodes N_E∈ℕ such that υ̅(η̅(ρ_N_E))≤e̅. Therefore, finding a roll-out strategy ensuring (<ref>) solves the overall problem of learning a Gaussian process model of f(·) such that a prescribed error bound e̅ is satisfied. § PROBABILISTIC UNIFORM ERROR BOUND In this section, we derive an easily computable uniform error bound for Gaussian process regression based on the prior distribution addressing the problem described in <ref>. We first present the uniform error bound and approaches to compute its parameters in <ref>. 
Since the bound also relies on the Lipschitz constant of the unknown function, which is not always known a priori, we show how a probabilistic Lipschitz constant can be derived from the prior Gaussian process distribution in <ref>. §.§ Uniform Error Bound based on Lipschitz Continuity Since the prior Gaussian process induces a probability distribution for each point in a compact set 𝕏, we can discretize this set and exploit standard tail bounds for Gaussian distributions to obtain point-wise error bounds <cit.>. If all involved functions are continuous, we can straightforwardly extend these point-wise guarantees yielding the uniform error bound presented in the following. Consider a zero mean prior Gaussian process defined on a compact set 𝕏 and let f:𝕏→ℝ be a continuous unknown function with Lipschitz constant L_f which satisfies <ref>. Assume the GP posterior mean μ(·) and standard deviation σ(·) are continuous with Lipschitz constant L_μ and modulus of continuity ω_σ(·). Moreover, pick δ∈ (0,1), τ∈ℝ_+ and set β_𝕏(τ) =2log(M(τ,𝕏)/δ), γ(τ) =( L_μ+L_f)τ+√(β_𝕏(τ))ω_σ(τ), where M(τ,𝕏) denotes the τ-covering number of 𝕏[The τ-covering number of a set 𝕏 is the smallest number, such there exists a set 𝕏_τ satisfying |𝕏_τ|=M(τ,𝕏) and ∀x∈𝕏 there exists x'∈𝕏_τ with x-x'≤τ.]. Then, the prediction error is uniformly bounded with probability of at least 1-δ on 𝕏 with bound η(x)=√(β_𝕏(τ))σ(x)+γ(τ). We exploit the continuity properties of the posterior mean, variance and the unknown function to prove the probabilistic uniform error bound by exploiting the fact that for every grid 𝕏_τ with |𝕏_τ| grid points and max_x∈𝕏min_x'∈𝕏_τx-x'≤τ it holds with probability of at least 1-|𝕏_τ|e^-β_𝕏(τ)/2 that <cit.> |f(x)-μ(x)|≤√(β_𝕏(τ))σ(x) ∀x∈𝕏_τ. Choose , then |f(x)-μ(x)|≤√(β_𝕏(τ))σ(x) ∀x∈𝕏_τ holds with probability of at least 1-δ. Due to continuity of f(x), μ(x) and σ(x) we obtain min_x'∈𝕏_τ|f(x)-f(x')| ≤τ L_f ∀x∈𝕏 min_x'∈𝕏_τ|μ(x)-μ(x')| ≤τ L_μ ∀x∈𝕏 min_x'∈𝕏_τ|σ(x)-σ(x')| ≤ω_σ(τ) ∀x∈𝕏. Moreover, the minimum number of grid points satisfying (<ref>) is given by the covering number M(τ,𝕏). Hence, we obtain P(|f(x)-μ(x)|≤√(β_𝕏(τ))σ(x)+γ(τ),  ∀x∈𝕏)≥ 1-δ, for β_𝕏(τ) and γ(τ) defined in (<ref>) and (<ref>), respectively. The virtual grid constant τ used in (<ref>) and (<ref>) balances the effect of the state space discretization and the inherent uncertainty measured by the posterior standard deviation σ(·). Therefore, γ(τ) can be made arbitrarily small by choosing a sufficiently fine virtual grid. This in turn increases β_𝕏(τ) and thus the effect of the posterior standard deviation σ(·) on the bound. However, β_𝕏(τ) depends merely logarithmically on τ such that even poor Lipschitz constants L_μ, L_f and moduli of continuity ω_σ(·) can be easily compensated by small virtual grid constants τ. Since the standard deviation σ(·) varies within the state space 𝕏, an optimal virtual grid constant τ, which minimizes the expression √(β_𝕏(τ))σ(x)+γ(τ) for all x∈𝕏, does not exist in general. While simple approaches such as choosing τ such that γ(τ) is negligible for all x∈𝕏 provide satisfying results in our simulations, more complex approaches remain open research questions. It is important to note that most of the parameters in <ref> do not require a difficult analysis such that the bound (<ref>) can be directly evaluated. While the computation of the exact covering number M(τ,𝕏) is a difficult problem for general sets 𝕏, it can be easily upper bounded as illustrated in <ref>. 
For this reason, we overapproximate the set 𝕏 through a d-dimensional hypercube 𝕏̃ with edge length r. Then, the covering number of 𝕏̃ is bounded by <cit.> M(τ,𝕏̃)≤(r√(d)/2τ)^d, which is by construction also a bound for the covering number of 𝕏, i.e., M(τ,𝕏)≤(r√(d)/2τ)^d. The Lipschitz constant L_μ of the posterior mean in (<ref>) can be straightforwardly bounded when the prior Gaussian process has a Lipschitz continuous kernel, as shown in the following lemma. Consider a zero mean prior Gaussian process defined through the L_k-Lipschitz kernel k(·,·). Then, its posterior mean μ(·) is continuous with Lipschitz constant=-1 L_μ ≤ L_k√(N) (K+σ_on^2I_N)^-1y. The norm of the difference between the posterior mean μ(x) evaluated at two different points is given by μ(x)-μ(x') = (k(x)-k(x')) α, with α=(K+σ_on^2I_N)^-1y. Due to the Cauchy-Schwarz inequality and the Lipschitz continuity of the kernel we obtain μ(x)-μ(x') ≤ L_k√(N)αx-x', which proves Lipschitz continuity of the mean μ(x). Moreover, the assumption of a Lipschitz continuous kernel also suffices to compute the modulus of continuity ω_σ(·) for the posterior standard deviation in (<ref>), as shown in the following lemma.=-1 Consider a zero mean prior Gaussian process defined through the L_k-Lipschitz kernel k(·,·). Then, its posterior standard deviation σ^2(·) is continuous with modulus of continuity=-1 ω_σ(τ) ≤√(2L_kτ). The difference between two different evaluations of the posterior standard deviation is bounded by |σ(x)-σ(x')|≤ d_k(x,x') as shown in <cit.>, where the kernel metric is defined as d_k(x,x')=√(k(x,x)+k(x',x')-2k(x,x')). Due to Lipschitz continuity of the kernel, we have d_k(x,x')≤√(2L_kx-x'), which concludes the proof. For the special case of stationary kernels , the convergence rate of the modulus of continuity ω_σ(·) can even be improved, as shown in the following. Consider a zero mean prior Gaussian process defined through the stationary, L_k-Lipschitz kernel k(·,·). Then, its posterior standard deviation σ(·) is continuous with modulus of continuity ω_σ(τ)=L_στ, where =-1 L_σ = sup_x-x'∈𝕏√(1/2k(0)-2k(x-x'))∇ k(x-x'). For stationary kernels, we can express the kernel metric as d_k(x,x')=d_k(x-x')=√(2k(0)-2k(x-x')). The simplified kernel metric is only a function of x-x', such that the supremum of the norm of the derivative of d_k(·,·) with respect to x-x' is the Lipschitz constant of σ(·). This derivative directly follows from the chain rule of differentation as ∇ d_k(x-x') = √(1/2k(0)-2k(x-x'))∇ k(x-x'), which concludes the proof. While computing the Lipschitz constant L_σ requires the computation of a supremum in general, this optimization problem can be straightforwardly solved analytically for specific kernel choices, e.g., squared exponential kernels <cit.>. Thereby, it allows the efficient computation of a tight modulus of continuity. The remaining open parameter in (<ref>) is the Lipschitz constant L_f of the unknown function f(·). In many applications, in particular in control, rough knowledge of the unknown function is known in advance, which can allow to specify L_f. Even if this constant is a rather poor estimate of the true Lipschitz constant, conservative estimates are not a crucial issue as discussed after <ref>. If no such knowledge of the unknown function f(·) is available, the prior Gaussian process distribution can be employed to derive a probabilistic Lipschitz constant as shown in the following section. 
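To make the quantities in the bound concrete, the following sketch assembles η(x)=√(β_𝕏(τ))σ(x)+γ(τ) for a GP with squared exponential kernel: the posterior mean and variance from the training data, the covering-number bound of the enclosing cube, the Lipschitz bound on the posterior mean, and the modulus of continuity ω_σ(τ)≤√(2L_kτ). The unknown function, the domain, the hyperparameters and the assumed Lipschitz constant L_f are illustrative choices, and L_k = σ_f²/(ℓ√e) is used as a valid Lipschitz constant of the SE kernel.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- illustrative setup: SE kernel, toy unknown function; all numbers are example choices ---
d, N = 2, 100
sf2, ell, sn2 = 1.0, 0.5, 0.01            # signal variance, lengthscale, noise variance sigma_on^2
r = 8.0                                    # edge length of a cube containing the domain X = [-4, 4]^2
delta, tau, L_f = 0.01, 1e-5, 2.0          # confidence level, virtual grid constant, Lipschitz constant of f

def se_kernel(A, B):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return sf2 * np.exp(-0.5 * sq / ell**2)

f = lambda X: np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1]     # stand-in for the unknown function
Xtr = rng.uniform(-4.0, 4.0, (N, d))
ytr = f(Xtr) + np.sqrt(sn2) * rng.normal(size=N)

K = se_kernel(Xtr, Xtr) + sn2 * np.eye(N)
alpha = np.linalg.solve(K, ytr)

def posterior(Xq):
    kq = se_kernel(Xq, Xtr)
    mu = kq @ alpha
    var = sf2 - np.einsum('ij,ji->i', kq, np.linalg.solve(K, kq.T))
    return mu, np.sqrt(np.maximum(var, 0.0))

# --- ingredients of eta(x) = sqrt(beta) * sigma(x) + gamma(tau) ---
L_k = sf2 / (ell * np.sqrt(np.e))                  # Lipschitz constant of the SE kernel
M = (r * np.sqrt(d) / (2.0 * tau))**d              # covering-number bound of the enclosing cube
beta = 2.0 * np.log(M / delta)
L_mu = L_k * np.sqrt(N) * np.linalg.norm(alpha)    # Lipschitz bound on the posterior mean
omega_sigma = np.sqrt(2.0 * L_k * tau)             # modulus of continuity of the posterior std
gamma = (L_mu + L_f) * tau + np.sqrt(beta) * omega_sigma

Xq = rng.uniform(-4.0, 4.0, (5, d))
mu, sigma = posterior(Xq)
eta = np.sqrt(beta) * sigma + gamma
for fx, m, e in zip(f(Xq), mu, eta):
    print(f"|f(x) - mu(x)| = {abs(fx - m):.3f}   <=   eta(x) = {e:.3f}")
print(f"beta = {beta:.1f},  gamma(tau) = {gamma:.3f}")
```

The small virtual grid constant τ keeps γ(τ) negligible at the price of a moderately larger β_𝕏(τ), mirroring the discussion after the theorem.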
§.§ Probabilistic Lipschitz Constants for Gaussian Processes In order to derive a probabilistic Lipschitz constant L_f of the unknown function f(·) from the prior Gaussian process distribution, we exploit the fact that the derivative of a Gaussian process is again a Gaussian process. Therefore, Lipschitz constants can be obtained by adapting results from the well-studied theory of suprema of Gaussian processes. This yields the following lemma, which is based on the metric entropy criterion <cit.>. Consider a Gaussian process with a continuously differentiable covariance function k(·,·) and let L_k denote its Lipschitz constant on the compact set 𝕏 which is included in a cube with edge length r. Then, the expected supremum of a sample function f(·) of this Gaussian process satisfies E[sup_x∈𝕏f(x)]≤ 12√(6d)max{max_x∈𝕏√(k(x,x)),√(rL_k)}. We prove this lemma by making use of the metric entropy criterion for the sample continuity of Gaussian processes <cit.>. This criterion allows to bound the expected supremum of a sample function f(·) by E[ sup_x∈𝕏f(x) ]≤∫_0^max_x∈𝕏√(k(x,x))√(log(N_k(ϱ,𝕏)))dϱ, where N_k(ϱ,𝕏) is the ϱ-packing number of 𝕏 with respect to the kernel metric (<ref>). Instead of bounding the ϱ-packing number, we bound the ϱ/2-covering number, which is known to be an upper bound of the packing number. The covering number can be easily bounded by transforming the problem of covering 𝕏 with respect to the metric d_k(·,·) into a coverage problem in the original metric of 𝕏. For this reason, define ψ(ϱ')=sup_x,x' ∈𝕏 x-x' _∞≤ϱ' d_k(x,x'), which is continuous due to the continuity of the covariance kernel k(·,·). Consider the inverse function ψ^-1(ϱ)=inf{ϱ'>0: ψ(ϱ')>ϱ}. Continuity of ψ(·) implies ϱ=ψ(ψ^-1(ϱ)). In particular, this means that we can guarantee d_k(x,x')≤ϱ/2 if . Due to this relationship it is sufficient to construct a uniform grid with grid constant 2ψ^-1(ϱ/2) in order to obtain a ϱ/2-covering net of 𝕏. Furthermore, the cardinality of this grid is an upper bound for the ϱ/2-covering number, such that we obtain N_k(ϱ,𝕏)≤⌈r/2ψ^-1(ϱ/2)⌉^d. Due to the Lipschitz continuity of the covariance function, we can bound ψ(·) by ψ(ϱ')≤√(2L_kϱ'). Hence, the inverse function satisfies ψ^-1(ϱ/2)≥(ϱ/2√(2L_k))^2 and consequently N_k(ϱ,𝕏)≤(1+4rL_k/ϱ^2)^d holds, where the ceil operator is resolved through the addition of 1. Substituting this expression in the metric entropy bound (<ref>) yields E[sup_x∈𝕏f(x)]≤ 12√(d)∫_0^max_x∈𝕏√(k(x,x))√(log(1+4rL_k/ϱ^2))dϱ. As shown in <cit.> this integral can be bounded by √(6)max{max_x∈𝕏√(k(x,x)), √(rL_k)}, which concludes the proof. While <ref> provides a bound merely for the expected supremum of a sample function, a high probability bound for the supremum can be obtained using the Borell-TIS inequality <cit.>. This is shown in the following result. Consider a Gaussian process with a continuously differentiable covariance function k(·,·). Then, with probability of at least 1-δ_L the supremum of a sample function f(·) of this Gaussian process is bounded by f_sup(δ_L,k(·,·),r)= √(2log( 1/δ_L))max_x∈𝕏√(k(x,x)) +12√(6d)max{max_x∈𝕏√(k(x,x)), √(rL_k)}. We prove this lemma by exploiting the wide theory of concentration inequalities to derive a bound for the supremum of the sample function f(x). We apply the Borell-TIS inequality <cit.>, which ensures for arbitrary c∈ℝ_0,+ that P( sup_x∈𝕏f(x)- E[ sup_x∈𝕏f(x) ] ≥ c )≤exp( -c^2/2max_x∈𝕏 k(x,x)). Due to <ref> we can directly bound E[sup_x∈𝕏f(x)]. 
Therefore, the lemma follows from substituting (<ref>) in (<ref>) and choosing c=√(2log( 1/δ_L))max_x∈𝕏√(k(x,x)). Since the derivatives of sample functions from Gaussian processes with sufficiently smooth kernels are the sample functions of the derivative Gaussian processes <cit.>, <ref> directly allows to compute a high probability Lipschitz constant for the unknown function f(·) from the prior Gaussian process distribution. This is summarized in the following Theorem. Consider a zero mean Gaussian process defined through the covariance kernel k(·,·) with continuous partial derivatives up to the fourth order and partial derivative kernels k^∂ i(x,x') =∂^2/∂ x_i∂ x_i' k(x,x') ∀ i=1,…, d. Then, a sample function f(·) of the Gaussian process is almost surely continuous on 𝕏 and with probability of at least 1-δ_L, L_f≤L̂_f=[ f_sup(δ_L/2d,k^∂ 1(·,·),r); ⋮; f_sup(δ_L/2d,k^∂ d(·,·),r) ] for f_sup(·,·,·) defined in (<ref>). Continuity of the sample function f(x) follows directly from <cit.>. Furthermore, this theorem guarantees that the derivative functions ∂/∂ x_if(x) are samples from derivative Gaussian processes with covariance functions k^∂ i(x,x'). Therefore, we can apply <ref> to each of the derivative processes and obtain with probability of at least 1-δ_L/d sup_x∈𝕏|∂/∂ x_if(x)| ≤ f_sup(δ_L/2d,k^∂ i(·,·),r). Applying the union bound over all partial derivative processes i=1,…,d finally yields the result. Since many practically employed kernels such as, e.g., the squared exponential, the Matern 5/2, satisfy the required smoothness assumption of <ref>, this assumption does not pose a severe restriction. Therefore, this theorem allows to straightforwardly determine high probability Lipschitz constants for the unknown function f(·), which can be directly used in <ref>, while barely requiring additional assumptions. § DATA DEPENDENCY OF LEARNING ERROR BOUNDS In order to derive conditions for ensuring that the learning error bound in <ref> is below a given threshold as described <ref>, we need to analyze its dependency on the training data density. For this purpose, we investigate the decay behavior of the probabilistic uniform error bound (<ref>) depending on the decrease rate of the GP standard deviation in <ref>. A kernel-dependent measure of data density is proposed in <ref> in order to bound the decrease rate of the GP standard deviation. Finally, it is shown in <ref> how the kernel-dependent density measure can be bounded using straightforwardly computable Euclidean distances. §.§ Asymptotic Bounds for the Learning Error Since the probabilistic uniform error bound (<ref>) consists of two summands, a vanishing posterior standard deviation σ(x) is not by itself sufficient to guarantee a decreasing value of η(x). Therefore, it is necessary to additionally vary the parameter τ, such that γ(τ) decreases with growing number of training samples N. Even though this leads to a growing value of β_𝕏(τ), it ensures an asymptotically vanishing learning error bound in the limits N→∞ and σ(x)→ 0 as shown in the following theorem. Consider a zero mean Gaussian process defined by the continuously differentiable kernel k(·,·). Let f:𝕏→ℝ be a continuous unknown function with Lipschitz constant L_f on the compact domain 𝕏 which satisfies <ref>. Then, for τ∈𝒪(1/N), the learning error asymptotically behaves as η(x)∈𝒪(√(log(N/δ))σ(x)+1/N). Due to Theorem <ref> with suitable value of β_𝕏(τ) it holds that sup_x∈𝕏|f(x)-μ(x)|≤√(β_𝕏(τ))σ(x)+γ(τ) with probability of at least 1-δ/2 for δ∈(0,1). 
A trivial bound for the covering number can be obtained by considering a uniform grid over the cube containing 𝕏. This approach leads to M(τ,𝕏)≤(r√(d)/2τ)^d. Therefore, we have β_𝕏(τ)≤ 2dlog(r√(d)/2τ)-2log(δ). In order to derive a bound for γ(τ), we employ the bounds for the Lipschitz constants and modulus of continuity. The Lipschitz constant L_μ in (<ref>) is bounded by L_μ ≤ L_k√(N) (K+σ_on^2I_N)^-1y due to <ref>. Since the Gram matrix K is positive semidefinite and f(·) is bounded by some f̅ due to Lipschitz continuity and a compact domain 𝕏, we can bound (K+σ_on^2I_N)^-1y by (K+σ_on^2I_N)^-1y ≤y/λ_min(K+σ_on^2I_N) ≤√(N)f̅ +ϵ/σ_on^2, where ϵ is a vector of N i.i.d. zero mean Gaussian random variables with variance σ_on^2. Therefore, it follows that ϵ^2/σ_on^2∼χ_N^2. Due to <cit.>, with probability of at least 1-exp(-log(2/δ)) we have ϵ^2≤(2√(Nlog(2/δ))+2log(2/δ)+N)σ_on^2. Hence, the Lipschitz constant of the posterior mean function μ(·) satisfies with probability of at least 1-δ/2 L_μ≤ L_kNf̅+√(N(2√(Nlog(2/δ))+2log(2/δ)+N))σ_on/σ_on^2. It can clearly be seen that the fastest growing term is increasing linearly, such that it holds that L_μ∈𝒪(N) with probability of at least 1-δ/2. The modulus of continuity in (<ref>) can be bounded by ω_σ(τ)≤√(2L_kτ) due to <ref>. Since the unknown function f(·) is assumed to admit a Lipschitz constant L_f, we obtain γ(τ)≤ L_kτNf̅+√(N(2√(Nlog(2/δ))+2log(2/δ)+N))σ_on/σ_on^2 +√(2β_𝕏(τ)L_kτ) +L_fτ. with probability of at least 1-δ/2 by substituting (<ref>) and (<ref>) into (<ref>). In order to admit asymptotically vanishing error bounds, (<ref>) must converge to 0 for N→∞, which is only ensured if τ decreases faster than 𝒪(1/N). Therefore, set τ∈𝒪(1/N) in order to guarantee γ_N(τ)∈𝒪( 1/N). However, this choice of τ implies that β_𝕏(τ)∈𝒪(log(N/δ)) due to (<ref>). Therefore, it directly follows that √(β_𝕏(τ))σ(x)+γ(τ)∈𝒪(√(log(N/δ))σ(x)+1/N), which concludes the proof. Due to the linear dependency of the bound for the Lipschitz constant L_μ on the number of training samples, the virtual grid constant must decay faster than 𝒪(1/N). This in turn leads to a logarithmic growth of β_𝕏(τ), which causes the √(log(N)) increase of the scaling factor of the posterior standard deviation σ(x). Note that this is a common phenomenon in uniform error bounds for GP regression and can also be found in RKHS based approaches, where similar bounds as (<ref>) are used to bound the effect of the noise <cit.>. §.§ Asymptotic Bounds for the Posterior Variance In order to compensate the growth of the scaling factor in <ref>, a sufficiently fast decay of the standard deviation σ(x) must be ensured. Therefore, we investigate the behavior of the posterior variance σ^2(x) depending on the training data density of an input data set 𝔻^x={x^(i)}_i=1^N. The starting point of this analysis is the following lemma, which provides a straightforward upper bound for the posterior variance σ^2(x). Consider a GP trained using a data set with input training samples 𝔻^x. Then, the posterior variance is bounded by=-1 σ^2(x) ≤σ_on^2k(x,x)+NΔ k(x)/N max_x'∈𝔻^x k(x',x')+σ_on^2, where Δ k(x)= k(x,x)max_x'∈𝔻^x k(x',x') -min_x'∈𝔻^x k^2(x',x). Since K+σ_on^2I_N is a positive definite, quadratic matrix, it follows that σ^2(x) ≤ k(x,x)- k(x)^2/λ_max(K)+σ_on^2. Applying the Gershgorin theorem <cit.> the maximal eigenvalue is bounded by λ_max(K)≤ N max_x'∈𝔻^x k(x',x'). Furthermore, due to the definition of k(x) we have k(x)^2≥ N min_x'∈𝔻^x k^2(x',x). 
Therefore, σ^2(x) can be bounded by σ^2(x) ≤ k(x,x)- Nmin_x'∈𝔻^x k^2(x',x)/N max_x'∈𝔻^x k(x',x')+σ_on^2. Finally, the proof follows from the definition of Δ k(x). This theorem does not pose any restriction on the employed kernel, but strongly depends on the particular choice of kernel. Therefore, it can be difficult to interpret. However, it can be significantly simplified for specific kernels, as shown in the following corollary for stationary covariance functions. Consider a GP with stationary kernel and input training samples 𝔻^x. Then, the posterior variance is bounded by=-1 σ^2(x)≤ k(0)-min_x'∈𝔻^xk^2(x-x')/k(0) +σ_on^2/N. The proof follows directly from <ref> and the fact that max_x'∈𝔻^xk(x',x')= k(0) since the kernel is stationary. In this special case of <ref>, which has been previously stated, e.g., in <cit.>, the kernel induces a notion of proximity, where the absence of training inputs x' with k(x-x')≈ 0 leads to a large bound for the posterior variance σ^2(x). Therefore, this corollary shows that it is desirable to have data close to the test point x as measured by k(·) for stationary kernels. Since <ref> and <ref> still consider the full input data set 𝔻^x, a single sample with k(x',x)≈ 0 can practically lead to the trivial bound σ^2(x)≲ k(x,x). This is clearly an undesired behavior for a bound since it would imply that additional data can potentially increase the posterior variance bound. In order to avoid this effect, we make use of an important property of Gaussian process posterior variances, which is the fact that σ^2(x) is non-increasing with the number of training samples N <cit.>. Therefore, we can consider subsets of 𝔻^x to compute the posterior variance bounds in <ref> and <ref>, which exclude these training samples with a negative effect on the bound. Due to the importance of Δ k(x) for these bounds, we make use of the following subset 𝕂_ρ'(x) ={x'∈𝔻^x: k^2(x,x)≤ k^2(x',x')≤1/ρ'+k^2(x',x) } for this purpose. It can be easily seen that considering only the subset 𝕂_ρ'(x)⊂𝔻^x in (<ref>) ensures k(x,x)max_x'∈𝕂_ρ'(x) k(x',x') -min_x'𝕂_ρ'(x) k^2(x',x)≤1/ρ'. Since the consideration of a subset of 𝔻^x also reduces the number of considered training samples in (<ref>), we trade-off the size of 𝕂_ρ'(x) and the ensured value for Δ k(x) by defining ρ' using the following optimization problem ρ(x)= max_ρ'∈ℝ_+ρ' such that |𝕂_ρ'(x)|≥ρ'σ_on^2k(x,x). It can easily be seen that ρ(x) is well-defined since the optimization problem is always feasible for ρ'→ 0. Moreover, it can be directly used as a measure of data density as shown in the following proposition. Consider a zero mean Gaussian process defined by the kernel k(·,·). If k(x,x)≠ 0, the posterior standard deviation at x satisfies σ(x)≤√(2/ρ(x)k(x,x)) such that it behaves as σ(x)∈𝒪( 1/√(ρ(x)) ). By exploiting the fact that the posterior variance σ^2(x) is non-increasing with the number of training samples N <cit.> and considering only samples inside the set 𝕂_ρ(x)(x) for the computation of the posterior standard deviation, we obtain=-1 σ^2(x) ≤σ_on^2k(x,x)+|𝕂_ρ(x)(x)| Δ k(x)/|𝕂_ρ(x)(x)| max_x'∈𝕂_ρ(x)(x) k(x',x')+σ_on^2 due to <ref>. Since x'∈𝕂_ρ(x)(x) implies k(x',x')≥ k(x,x), we can simplify this expression to σ^2(x) ≤σ_on^2/|𝕂_ρ(x)(x)| +Δ k(x)/k(x,x). 
Moreover, it can be straightforwardly checked that the restriction to 𝕂_ρ(x)(x) implies Δ k(x)≤1/ρ(x), which yields σ^2(x) ≤σ_on^2k(x,x)/|𝕂_ρ(x)(x)| k(x,x)+1/ρ(x)k(x,x) Since |𝕂_ρ(x)(x)| is lower bounded by ρ(x)σ_on^2k(x,x) by definition, we obtain σ^2(x) ≤2/ρ(x)k(x,x), which directly implies σ(x)∈𝒪(1/√(ρ(x))). concluding the proof. It can be clearly seen that ρ(x) is a measure of data density which is highly specific for each particular GP and therefore is capable of reflecting the requirements on good data distributions posed by the employed kernel k(·,·). Moreover, it immediately follows from <ref> that a sufficiently fast growth of ρ(x), i.e., ρ(x)∉𝒪(log(N)), guarantees a vanishing error bound |μ(x)-f(x)|→ 0. Therefore, ρ(·) satisfies the requirements posed on a suitable measure of data density in <ref>. §.§ Conditions for Specific Kernels The high flexibility of <ref> allows its application to GPs with arbitrary kernels, but comes at the price of a difficult interpretability. However, when we fix a specific kernel, it is often possible to derive more accessible and intuitive subsets contained in 𝕂_ρ'(x), as shown in the following lemma for linear, squared exponential and Matérn class kernels. Geometrically interpretable subsets of 𝕂_ρ'(x) defined in (<ref>) are given by * the set ℍ_ρ'^c(x)={ x'∈𝔻^x: x'^2(x'^2-cx^2) ≤1/ρ', x≤x', |x^Tx'|≥ cxx'}⊂𝕂_ρ'(x) for every c∈(0,1);=-1 * the Euclidean ball 𝔹_√(1/2L_∂ kσ_f^2ρ')(x)= {x'∈𝔻^x: x-x'≤√(1/2L_∂ kσ_f^2ρ')}⊂𝕂_ρ'(x) for isotropic SE or Matérn kernels with ν≥3/2 and σ_f^2=k(x,x). Due to the definition of the linear kernel, we have the identity k^2(x',x')-k^2(x',x)= x'^4-(x^Tx')^2. For |x^Tx'|/(xx')≥ c, we therefore obtain k^2(x',x')-k^2(x',x)≤x'^2(x'^2-cx^2). Finally, the first inequality in (<ref>) yields the requirement k^2(x,x)=x^4≤x'^4= k^2(x',x'), which concludes the first part of the proof. For the second part of the proof, we exploit the continuous differentiability of Matérn kernels with ν≥3/2 and squared exponential kernels together with the fact that their derivative at r=x-x'=0 is 0. Therefore, we have k(x-x')≥σ_f^2-L_∂ kx-x'^2. where L_∂ k∈ℝ_+ is the Lipschitz constant of the kernel derivative. Using this lower bound, we obtain k^2(0)-k^2(x-x') ≤ 2L_∂ kσ_f^2x-x'^2-L_∂ k^2x-x'^4, which we can simplify to k^2(0)-k^2(x-x') ≤ 2L_∂ kσ_f^2x-x'^2 due to non-negativity of the norm. Therefore, x-x'^2≤ρ'/2L_∂ kσ_f^2 implies |k^2(x,x)-k^2(x,x')|≤ρ'. Since k(x,x)=k(x',x') for isotropic kernels, the first inequality is always satisfied, concluding the proof. This lemma illustrates the flexibility of quantifying the data density using 𝕂_ρ'(x). While this set can be innerapproximated by a ball for Matérn and SE kernels as illustrated in <ref>, it looks more like segments of a sphere for linear kernels. Since we can easily determine the volume of such simple geometrical structures, <ref> enables the derivation of a straightforward relationship between the sampling distributions and data density ρ(x). For example, when training samples in 𝔻^x are generated by drawing from a uniform distribution, the number of points in a Euclidean ball is proportional to the volume of the ball, i.e., 𝔹_ρ'(x)∝N/ρ'^d. Therefore, it follows from (<ref>) that ρ(x)∈𝒪(N^1/d+1) for SE or Matérn kernels with uniformly drawn input training samples. This in turn implies that σ(x)∈𝒪(1/N^1/2d+2) due to <ref> and consequently |μ(x)-f(x)|∈𝒪(log(N)/N^1/2d+2) due to <ref>. 
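To illustrate how the density measure can be evaluated in practice for the SE kernel, the following sketch computes a lower bound on ρ(x) by counting training inputs in the Euclidean ball of the lemma and performing a scalar search over candidate values ρ', and then evaluates the resulting bound σ(x) ≤ √(2/(ρ(x)k(x,x))). The hyperparameters and the uniformly drawn inputs are illustrative, and L_∂k = σ_f²/ℓ² is used as an admissible, slightly conservative, Lipschitz constant of the SE-kernel derivative.

```python
import numpy as np

rng = np.random.default_rng(3)

# --- illustrative SE-kernel setup; hyperparameters and data are example choices ---
d, sf2, ell, sn2 = 2, 1.0, 0.5, 0.01
L_dk = sf2 / ell**2                     # admissible Lipschitz constant of the SE-kernel derivative
X = rng.uniform(-4.0, 4.0, (1000, d))   # uniformly drawn training inputs

def density_lower_bound(x, X):
    """Largest rho' such that the ball of radius sqrt(1/(2*L_dk*sf2*rho'))
    around x contains at least rho' * sn2 * k(x,x) training inputs."""
    dist = np.linalg.norm(X - x, axis=1)
    best = 0.0
    for rho in np.logspace(-2, 6, 400):                 # simple scalar search over candidate rho'
        radius = np.sqrt(1.0 / (2.0 * L_dk * sf2 * rho))
        if np.sum(dist <= radius) >= rho * sn2 * sf2:   # |K_rho'(x)| >= rho' * sigma_on^2 * k(x,x)
            best = rho
    return best

x_query = np.zeros(d)
rho = density_lower_bound(x_query, X)
print(f"rho(x) >= {rho:.1f},  sigma(x) <= {np.sqrt(2.0 / (rho * sf2)):.3f}")

# doubling the amount of uniformly drawn data increases the density measure
X2 = np.vstack([X, rng.uniform(-4.0, 4.0, (1000, d))])
rho2 = density_lower_bound(x_query, X2)
print(f"with twice the data: rho(x) >= {rho2:.1f},  sigma(x) <= {np.sqrt(2.0 / (rho2 * sf2)):.3f}")
```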
This demonstrates the flexibility and effectiveness of the derived formalism for bounding the asymptotic decay of the prediction error |μ(x)-f(x)| presented in this section.=-1 § SAFETY GUARANTEES FOR CONTROL OF UNKNOWN DYNAMICAL SYSTEMS We employ the theoretical results for GP error bounds introduced in the previous sections to develop an iterative approach for ensuring arbitrary tracking accuracy with the considered control law (<ref>). For this purpose, we derive a time-varying tracking error bound in <ref> which depends explicitly on the uniform GP error bound along the reference trajectory. This result allows us to analyze the asymptotic decay of the tracking error bound depending on the training data density measured by ρ(x) in <ref>. Finally, we employ the obtained insight to develop an episodic approach for ensuring arbitrary tracking accuracy in <ref>. §.§ Probabilistic Tracking Error Bound Since <ref> ensures distinct eigenvalues of the matrix A_θ defining the closed-loop behavior of the dynamics (<ref>) of the tracking error e=x-x_ref, we can compute the eigendecomposition A_θ=UΛU^-1, where Λ is a diagonal matrix consisting of the eigenvalues of A_θ. This allows the derivation of a dynamic bound for the tracking error e inspired by the comparison principle <cit.>, as shown in the following theorem.=-1 Consider a linear system (<ref>) satisfying <ref>, which is perturbed by a L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the bounded reference x_ref. Then, the tracking error is bounded by x(t)-x_ref(t)≤υ(t) with probability of at least 1-δ, where υ(t) is the solution of the linear dynamical system υ̇=(λ_max(A_θ)+L_σζ√(β_𝕏(τ)))υ + ζη(x_ref) with initial condition υ(0)=UUe(0) and constant ζ=UU^-1b. Due to the error dynamics in (<ref>), its solution is given by e(t) = e^A_θte(0)+∫_0^t e^A_θ (t-t') b f_e(t')dt', where f_e(t)=f(x(t))-μ(x(t)). Therefore, we directly obtain e(t)≤e^A_θte(0)+∫_0^t e^A_θ (t-t') b |f̅_e(t')|dt', where f̅_e(t) can be any function such that |f_e(t)|≤f̅_e(t). Using the eigendecomposition of A_θ=UΛU^-1, it can be directly seen that e^A_θtb≤UU^-1be^λ_max(A_θ)t. Hence, we obtain e(t)≤ UU^-1e(0)e^λ_max(A_θ)t +UU^-1b∫_0^t e^λ_max(A_θ) (t-t') |f_e(t')|dt'. The right handside of this inequality is again the solution of a differential equation such hat e(t)≤υ̃ for υ̇̃̇=λ_max(A_θ)υ̃+UU^-1bf̅_e(t) with υ̃(0)=UU^-1e(0). It remains to derive a bound f̅_e(t) for |f_e(t)| in (<ref>). Due to <ref>, it holds that |f_e(t)|≤η_N(x(t)) for all x∈𝕏 with probability of at least 1-δ. Moreover, we have η_N(x(t))≤η_N(x_ref(t))+L_σ√(β_𝕏(τ))e(t) due to Lipschitz continuity of σ(·) guaranteed by <ref>. Therefore, it follows that υ̇̃̇≤(λ_max(A_θ)+L_σζ√(β_𝕏(τ)))υ̃ + ζη(x_ref), which concludes the proof. Since η(x_ref) can be directly computed at any time instant, determining the tracking error bound using <ref> simply requires simulating the linear dynamical system (<ref>). This can be straightforwardly done for a given time horizon in contrast to similar prior approaches <cit.>, where the uniform error bound needs to be determined at the actual system state x. In order to achieve this improved practical applicability, additional requirements on the stability of the linear dynamics described by A_θ are necessary. 
It is obvious that (<ref>) only remains bounded if the linear dynamics (<ref>) are stable, which can be straightforwardly shown to require λ_max(A_θ)<-L_σζ√(β_𝕏(τ)). Due to the dependency of the eigenvalue λ_max(A_θ) on the parameters θ, this condition can be satisfied if θ≥α^-1(-L_σζ√(β_𝕏(τ))). Therefore, this condition effectively poses a lower bound on the admissible control gains. §.§ Dependency of Accuracy Guarantees on Data Density While <ref> provides an accurate bound for the tracking error depending on the local data density, it is challenging to apply this result to the asymptotic analysis of the tracking error. Therefore, we bound the maximum tracking error along the reference trajectory as shown in the following proposition. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by a L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the bounded reference x_ref. If (<ref>) is satisfied, then, for e(0)=0, the maximum tracking error is bounded by sup_t≥ 0e(t)≤υ̅ with probability of at least 1-δ, where υ̅ = -ζ/λ_max(A_θ)+L_σζ√(β_𝕏(τ))sup_t≥ 0η(x_ref(t)). It immediately follows from (<ref>) that e(t) ≤ ζ∫_0^t e^(λ_max(A_θ)+L_σζ√(β_𝕏(τ))) (t-t') dt'sup_0≤ t'≤ tη(x_ref(t')). Since the integral can be straightforwardly calculated, we obtain sup_t≥ 0e(t)≤ -ζsup_t≥ 0η(x_ref(t))/λ_max(A_θ)+L_σζ√(β_𝕏(τ)), which concludes the proof. Note that the restriction to a zero initial condition is only considered to simplify the derivation, but the extension to non-zero initial conditions is straightforward. Therefore, the assumptions of <ref> are not more restrictive than those of <ref>. In order to analyze the asymptotic behavior of the tracking error, we combine <ref> with <ref>. Using the shorthand notation ρ=inf_t≥ 0ρ(x_ref(t)), this results in the following theorem. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by a L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the bounded reference x_ref. Choose τ such that β_𝕏(τ)≥γ^2(τ)ρk(0)/2 and θ such that κ=-2ζ√(β_𝕏(τ))/λ_max(A_θ)+L_σζ√(β_𝕏(τ)) is constant and (<ref>) is satisfied. Then, for e(0)=0, the maximum tracking error bound asymptotically behaves as υ̅∈𝒪(1/√(ρ)). We first focus on the asymptotic behavior of the maximum learning error bound along the reference sup_t≥ 0η(x_ref(t)), which can be expressed as sup_t≥ 0η(x_ref(t)) = √(β_𝕏(τ))sup_t≥ 0σ(x_ref(t))+γ(τ). Due to <ref>, the considered parameter β_𝕏(τ) implies sup_t≥ 0σ(x_ref(t))≥γ(τ)/√(β_𝕏(τ)), such that we can simplify the learning error bound to sup_t≥ 0η(x_ref(t)) ≤ 2√(β_𝕏(τ))sup_t≥ 0σ(x_ref(t)). Therefore, it follows from proposition <ref> that υ̅ = κsup_t≥ 0σ(x_ref(t)), whose asymptotic behavior only depends on σ(x_ref(t)) due to the assumed constant value of κ̃, i.e., υ̅∈𝒪(sup_t≥ 0σ(x_ref(t))). Due to <ref>, we have sup_t≥ 0σ(x_ref(t))∈𝒪(1/√(ρ)), which concludes the proof. This theorem establishes a direct relationship between the minimum data density ρ along the reference trajectory x_ref(t) and the maximum of the tracking error e, showing that an arbitrarily small tracking error can be guaranteed when suitable data is available. Since this requires a vanishing γ(τ), β_𝕏(τ) must grow. 
The chosen β_𝕏(τ) in <ref> satisfies this property. In order to see this note that √(β_𝕏(τ)) is growing with decreasing τ and γ(τ)∈𝒪(Nτ) holds for stationary kernels. Therefore, we can set τ∝1/(N√(ρ)), which directly yields β_𝕏(τ)∝log(N√(ρ)). Due to condition (<ref>), this increase rate of β_𝕏(τ) finally requires reducing eigenvalues -λ_max(A_θ)∝√(log(N√(ρ))). While this increase requirement might seem like a restrictive assumption, it is important to note that without learning, it follows from the proof of <ref> that -λ_max(A_θ)∝1/υ̅. In contrast, we immediately obtain ρ∝1/υ̅^2 from (<ref>), such that -λ_max(A_θ)∝√(log(N/υ̅)) holds. Assuming the number of training samples N grows at most polynomially with ρ as ensured, e.g., for the case of SE or Matérn kernels with uniformly distributed training data discussed in <ref>, this finally implies -λ_max(A_θ)∈𝒪(√(log(1/υ̅))). Therefore, the requirement on the growth rate for ensuring arbitrarily small tracking errors reduces from hyperbolic to log-hyperbolic with suitable training data.=-1 §.§ Episodic Data Generation for Prescribed Performance Although <ref> provides conditions for training data to ensure an arbitrarily small tracking error e, it does not provide direct insights how suitable training data sets can be obtained. Therefore, we develop an episodic approach for generating training data sets in this section. For simplicity, we consider a constant sampling time T_s∈ℝ_+ during each episode with execution time T_p∈ℝ_+, which yields data sets of the form 𝔻_N^T_s={(x(iT_s),f(x(iT_s))+ϵ^(i)) }_i=0^N_p, where N_p = ⌊ 1+T_p/T_s⌋ denotes the number of training samples gathered during one episode. Therefore, the tracking error bound υ̅ from one episode immediately provides guarantees for the training data of the next episode. We exploit this by adjusting the sampling time T_s and the maximum eigenvalue λ_max(A_θ) as demonstrated in <ref> in order to ensure a sufficiently small error bound for the next episode. This dependency on the sampling time is emphasized by an index T_s in the posterior standard deviation σ_T_s(·). As shown in the following theorem, this approach guarantees the termination of <ref> after a finite number of iterations. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by a L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the bounded reference x_ref. If θ and T_s are chosen such that -λ_max(A_θ) ≥8√(L_∂ k) )+ξ L_σ/ξζ√(β_𝕏(τ)) max_0≤ t ≤ T_pσ^2_T_s(x_ref(t)) ≤ 16L_∂ kυ̅_i-1^2 holds in every episode for ξ<1, <ref> terminates after at most N_E=⌈log(4e̅√(L_∂ k))-log(√(k(0)))/log(ξ)⌉ episodes with probability of at least 1-N_Eδ. It is straightforward to see that (<ref>) together with <ref> implies υ̅_0 =κ√(k(0)), υ̅_i+1 =4√(L_∂ k)κυ̅_i for τ such that (<ref>) is satisfied, where the index i is used to denote the episode. Since 4√(L_∂ k)κ≤ξ<1 holds due to (<ref>), it immediately follows that υ̅_i decays exponentially, i.e., υ̅_i=ξ^iυ̅_0 with probability of at least 1-δ for each episode. Therefore, <ref> is guaranteed to terminate after N_E episodes with probability of at least 1-N_Eδ due to the union bound. =-1 Due to the exponential decay of the tracking error bound υ̅ ensured by <ref>, <ref> quickly terminates. This comes at the price of higher requirements (<ref>) on the eigenvalues of A_θ compared to <ref>. 
However, the difference is merely a constant factor, and it is indeed straightforward to see that -λ_max(A_θ)∝1/√(log(e̅)) is sufficient to compensate the effect of an increasing β_𝕏(τ) for all polynomially growing data sets. Therefore, this requirement is still significantly lower compared to ensuring the tracking error bound e̅ without learning as discussed in <ref>. While the results in previous sections posed requirements on the data distribution in terms of the data density ρ(x), <ref> explicitly considers the data generation process by providing an upper bound for the sampling time T_s in (<ref>). Due to the form of this condition, it cannot be computed before the controller is applied to the system, but it can easily be verified a posteriori. Therefore, we can ensure it via a sufficiently high sampling rate during the application of the controller, such that we simply can downsample the obtained data to the necessary sampling time T_s. The required maximum sampling rate can be bounded using the following proposition. Consider a linear system (<ref>) satisfying <ref>, which is perturbed by a L_f-Lipschitz nonlinearity f(·) satisfying <ref>. Assume that a zero mean Gaussian process with L_k-Lipschitz stationary kernel is used to learn a model f̂(·)=μ(·) of f(·), such that a controller (<ref>) is used to track the continuous, bounded reference x_ref. Then, the sampling time T_s required by condition (<ref>) in <ref> is bounded by T_s≥T_s=16L_∂ ke̅^3/σ_on^2max_0≤ t ≤ T_pẋ(t). We prove this proposition by deriving a value of T_s which satisfies (<ref>) Due to <ref>, (<ref>) is guaranteed to hold if ρ≥1/(8L_∂ kσ_f^2 υ̅_i-1^2). Set ρ'=1/(8L_∂ kσ_f^2 υ̅_i-1^2). Then, it follows from <ref> that 𝔹_2υ_i-1(x_ref(t))⊂𝕂_ρ'(x_ref(t)). The Euclidean ball around x_ref(t) on the left handside can be inner bounded by a Euclidean ball with half the radius around the actual trajectory, i.e., 𝔹_υ̅_i-1(x(t))⊂𝔹_2υ̅_i-1(x_ref(t)). The smaller Euclidean ball has a diameter of υ̅_i-1 and the actual trajectory passes through its center. Moreover, the distance between two samples can be bounded by T_s max_0≤ t ≤ T_pẋ(t). Note that the maximum temporal derivative of the state is bounded. In order to see this, note that we can express the dynamics of the system as ẋ=ẋ_ref+A_θe+b(f(x)-μ(x). Due to the bounded prediction error, the bounded tracking error and the continuous reference trajectory, we can therefore bound the state derivative by max_0≤ t≤ T_pẋ(t) ≤(A_θ+√(β_𝕏(τ))L_σ)υ̅_i+max_0≤ t≤ T_pη(x_ref(t)) +max_0≤ t≤ T_pẋ_ref(t). This allows us to bound the number of points in 𝕂_ρ'(x_ref(t)) by |𝕂_ρ'(x_ref(t))|≥ |𝔹_υ̅_i-1(x(t))|≥2υ̅_i-1/ T_smax_0≤ t ≤ T_pẋ(t). For ρ≥ρ', it must hold that 2υ_i-1/ T_smax_0≤ t ≤ T_pẋ(t)≥ρ'σ_on^2k(0)=σ_on^2/8L_∂ kυ_i-1^2 due to (<ref>). This inequality can be ensured to hold by setting T_s=16L_∂ kυ̅^3_i-1/σ_on^2max_0≤ t ≤ T_pẋ(t), which concludes the proof. § NUMERICAL EVALUATION In order to demonstrate the flexibility and effectiveness of the derived theoretical results, we compare the tracking error bounds with empirically observed tracking errors in different simulations. In <ref>, we evaluate the time-varying tracking error bound for training data unevenly distributed over the relevant part of the state space 𝕏. The behavior of the asymptotic error bound is investigated in <ref>. Finally, we demonstrate the effectiveness of the proposed episodic data generation approach for ensuring a desired tracking accuracy in <ref>. 
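Before turning to the individual experiments, the following sketch works out the episode bookkeeping implied by the termination theorem and the sampling-time proposition of the preceding section: the guaranteed number of episodes N_E, the per-episode tracking error bounds υ̅_i=ξ^iυ̅_0, and the per-episode sampling times. All numerical values (kernel constants, noise level, the bound on ẋ, the target accuracy ē) are placeholder choices and do not correspond to the simulations reported below.

```python
import numpy as np

# --- placeholder constants; none of these numbers are taken from the paper ---
k0 = 1.0             # prior variance k(0) of the stationary kernel
L_dk = 4.0           # Lipschitz constant of the kernel derivative
xi = 0.95            # per-episode contraction factor, 4*sqrt(L_dk)*kappa <= xi < 1
e_bar = 1e-2         # desired tracking error bound
sigma_on2 = 0.01     # observation noise variance
x_dot_max = 5.0      # assumed bound on max_t ||x_dot(t)|| along the trajectory

# number of episodes guaranteed by the termination theorem
N_E = int(np.ceil((np.log(4.0 * e_bar * np.sqrt(L_dk)) - np.log(np.sqrt(k0))) / np.log(xi)))

kappa = xi / (4.0 * np.sqrt(L_dk))     # gain-dependent constant chosen so that the contraction is exactly xi
v_bar = kappa * np.sqrt(k0)            # tracking error bound before any data has been collected
schedule = []
for i in range(1, N_E + 1):
    # sampling time sufficient for the posterior-variance condition in episode i
    T_s = 16.0 * L_dk * v_bar**3 / (sigma_on2 * x_dot_max)
    schedule.append((i, v_bar, T_s))
    v_bar = xi * v_bar                 # guaranteed contraction of the tracking error bound
    if v_bar <= e_bar:
        break

print(f"episodes guaranteed by the theorem: N_E = {N_E}")
for i, vb, Ts in schedule[:: max(1, len(schedule) // 6)]:
    print(f"episode {i:3d}:  error bound {vb:.4f}   sampling time {Ts:.3e}")
```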
§ NUMERICAL EVALUATION

In order to demonstrate the flexibility and effectiveness of the derived theoretical results, we compare the tracking error bounds with empirically observed tracking errors in different simulations. In <ref>, we evaluate the time-varying tracking error bound for training data unevenly distributed over the relevant part of the state space 𝕏. The behavior of the asymptotic error bound is investigated in <ref>. Finally, we demonstrate the effectiveness of the proposed episodic data generation approach for ensuring a desired tracking accuracy in <ref>.

§.§ Data-dependency of Safety Regions

For evaluating the time-varying tracking error bound, we consider a nonlinear dynamical system
ẋ_1=x_2, ẋ_2=f(x)+g(x)u,
where f(·) is an unknown nonlinearity and g(x)= 1+(1/2)sin(x_2/2), which is a marginal variation of the system considered in <cit.>. Assuming exact knowledge of g(·), we can approximately feedback linearize this system and apply a linear tracking controller
u_lin=-θ_1θ_2 x_1-θ_2 x_2,
where θ_1,θ_2∈ℝ_+ are design parameters. This yields a two-dimensional system of the form (<ref>) with
A_θ=[ 0 1; -θ_1θ_2 -θ_2 ], b=[ 0; 1 ].
In order to demonstrate the effect of the data distribution, we use a uniform grid over [0, 3]×[-4, 4] with 25 points and σ_on^2 = 0.01 as training data set, such that half of the considered state space 𝕏 =[-5, 5]^2 is not covered by training data. A SE kernel with automatic relevance determination is employed for Gaussian process regression and the hyperparameters are optimized using likelihood maximization. For computing the uniform prediction error bound in <ref>, we set τ=0.01, δ=0.01 and L_f=2. The task is to track the circular reference trajectory x_d(t) = 2sin(t) with state x_1, which leads to the reference trajectory x_ref(t)=[2sin(t) 2cos(t)]^T. We aim to achieve this using θ_1=10 and θ_2=20, which can be shown to satisfy condition (<ref>).

Snapshots of the resulting trajectory together with visualizations of the tracking error bounds obtained using <ref> are illustrated in <ref>. When the GP standard deviation σ(x_ref) is large, the tracking error bound υ(t) starts to increase, such that it reaches its maximum just before the system enters the region with low standard deviation. Afterwards, the feedback controller reduces the tracking error until the standard deviation starts to increase again. This leads to the minimum of the tracking error bound illustrated on the left of <ref>. This effect can also be seen in the observed tracking error as illustrated in <ref>, which has its peaks at times when the tracking error bound υ is large. Therefore, the tracking error bound υ reflects the behavior of the observed error e well, even though it is rather conservative.

The sources of this conservatism can be easily investigated by determining the bound obtained when using the true model error |f(x_ref)-μ(x_ref)| as input in (<ref>). It is clearly visible that even with knowledge of the true prediction error, the tracking error bound exhibits some conservatism due to the linearization around the reference trajectory x_ref. The remaining conservatism is a consequence of the prediction error bound η(x_ref) as visualized at the bottom of <ref>. Even though this bound reflects the availability of data well, it needs to capture the probabilistic worst case and is therefore considerably larger than the actual prediction error |f(x_ref)-μ(x_ref)|. This leads to the fact that the tracking error bound υ conservatively reflects the behavior of the observed tracking error e. Note that the usage of a probabilistic Lipschitz constant L̂_f obtained via <ref> does not significantly change this behavior. The corresponding tracking error bound merely becomes slightly larger since we can compensate the conservative value of L̂_f using a smaller value τ=10^-3. Therefore, <ref> enables the effective computation of prediction error bounds without knowledge of a Lipschitz constant of the unknown function f(·).
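The qualitative behavior described in this subsection can be reproduced with the self-contained sketch below. Since the nonlinearity f(·) of <cit.> is not restated here, a placeholder nonlinearity is used; the GP hyperparameters are fixed instead of optimized by likelihood maximization, and explicit Euler integration replaces a proper ODE solver. The sketch is therefore an illustration under these assumptions, not the code used to generate the figures.

```python
import numpy as np

# Feedback-linearizing tracking controller with GP compensation for the
# planar system x1_dot = x2, x2_dot = f(x) + g(x) u.  The nonlinearity f(.)
# below is a *placeholder* (the one used in the experiments is not reproduced
# here); g(.) and the gains follow the description in the text.
f = lambda x: 1.0 - np.sin(x[0]) + 0.5 * x[1]            # assumed stand-in
g = lambda x: 1.0 + 0.5 * np.sin(x[1] / 2.0)
theta1, theta2 = 10.0, 20.0
sigma_on = 0.1                                           # noise std, sigma_on^2 = 0.01

# Training data: uniform 5x5 grid on [0,3] x [-4,4], noisy observations of f.
rng = np.random.default_rng(0)
g1, g2 = np.meshgrid(np.linspace(0, 3, 5), np.linspace(-4, 4, 5))
X_train = np.stack([g1.ravel(), g2.ravel()], axis=1)
y_train = np.array([f(x) for x in X_train]) + sigma_on * rng.standard_normal(25)

# Plain GP posterior mean with a squared-exponential kernel and fixed
# hyperparameters (assumed values, chosen for brevity).
ell, sf2 = 1.0, 1.0
def k(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell**2)
K_inv_y = np.linalg.solve(k(X_train, X_train) + sigma_on**2 * np.eye(25), y_train)
mu = lambda x: (k(x[None, :], X_train) @ K_inv_y).item()

# Reference x_ref(t) = [2 sin t, 2 cos t] and the needed second derivative.
x_ref  = lambda t: np.array([2.0 * np.sin(t), 2.0 * np.cos(t)])
dd_ref = lambda t: -2.0 * np.sin(t)

# Closed-loop simulation with explicit Euler integration.
dt, T = 1e-3, 10.0
x, err_max = np.array([0.0, 2.0]), 0.0
for i in range(int(T / dt)):
    t = i * dt
    e = x - x_ref(t)
    u = (-mu(x) + dd_ref(t) - theta1 * theta2 * e[0] - theta2 * e[1]) / g(x)
    x = x + dt * np.array([x[1], f(x) + g(x) * u])
    err_max = max(err_max, np.linalg.norm(e))
print("max tracking error:", err_max)
```

With the training grid covering only x_1 ∈ [0,3], one would expect the recorded tracking error to be larger on the half of the reference circle that is not covered by data, mirroring the behavior of the bound discussed above.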
§.§ Dependency of the Tracking Accuracy on the Data Density

In order to investigate the dependency of the tracking error bound υ on the data density ρ in more detail, we consider the same setting as in <ref>, but use grids with different grid constants defined on [-4,4]^2 as training data sets, such that they cover the whole relevant domain. Due to the varying size of the training data set, we determine τ by finding the maximum value satisfying (<ref>) using a line search. We set θ_1=θ_2=θ, such that we can compute a gain θ ensuring κ=10 in (<ref>) for the obtained value of τ. The resulting tracking errors e and bounds sup_t≥0υ(t) obtained with <ref> for different data densities ρ are illustrated in <ref>. Moreover, the asymptotic decay rate of υ̅ guaranteed by <ref> is depicted. It can be clearly seen that the asymptotic decay rate closely reflects the actual decay rate of the error bound sup_t≥0υ(t). Analogously to <ref>, the tracking error bound is rather conservative, but the observed error e exhibits a decay rate closely resembling that of its bound sup_t≥0υ(t). Despite this conservatism, the maximum eigenvalues λ_max(A_θ) necessary for ensuring a low desired tracking error bound sup_t≥0υ(t) with such training data are significantly larger than without a controller compensating the nonlinearity, as depicted in <ref>. This baseline comparison can be straightforwardly obtained as -λ_max(A_θ)≥ζf̅/e̅ by slightly adapting the proof of <ref> using |f(x)|≤f̅ and μ(x)=0. Due to the linear growth of this condition with 1/e̅, it quickly exceeds the value of -λ_max(A_θ) ensuring the same tracking error bound through the learned controller, even though we use the non-conservative bound f̅=3. This clearly demonstrates the benefits of the derived theoretical results.

§.§ Episodic Data Generation

For evaluating the episodic data generation using <ref>, we consider the same setting as in <ref>. Moreover, we set θ_1=θ_2=θ analogously to the previous section and choose θ such that ξ=0.95 holds in every iteration. A high-frequency data set with sampling time 3·10^-4 is generated in every episode, such that a line search can be used to determine the maximum value of T_s satisfying (<ref>). The tracking error bounds obtained from <ref> with these parameters are exemplarily illustrated for several different episodes in <ref>. Due to the constant sampling time, the training data density along the reference is very similar within an episode, which directly leads to the rather minor variations of the tracking error bound over time. Moreover, it can be seen that the decrease of the tracking error bound υ is significantly larger during the first few episodes, before it slows down. This becomes even clearer when plotting the behavior of the error bound over the number of episodes as depicted in <ref>. During the first 10 episodes, the error bound sup_t≥0υ(t) decays faster than the rate ξ^N_Eυ̅_0 guaranteed by <ref>. This can be attributed to the fact that, at the beginning, even a single additional data point reduces the posterior variance more than required for (<ref>). Once a sufficiently large number of additional training samples is necessary to ensure (<ref>), this effect vanishes and the error bound sup_t≥0υ(t) closely follows the guaranteed decrease rate. In fact, the tracking error bound sup_t≥0υ(t), while being rather conservative similarly to the previous simulations, even reflects the behavior of the actually observed tracking error e accurately after 10 episodes.

Note that this unexpectedly fast decay at the beginning has no influence on the required maximum eigenvalues λ_max(A_θ) as depicted in <ref>. While smaller eigenvalues are required for the episodic approach compared to the asymptotic analysis in <ref>, the maximum eigenvalue λ_max(A_θ) used in <ref> closely follows the expected 𝒪( log(1/sup_t≥0υ(t))) behavior. Moreover, it can be directly seen that <ref> offers a significant advantage over a direct reduction of the tracking error bound using the maximum eigenvalue λ_max(A_θ) without a compensation of the nonlinearity. Note that the sampling time T_s necessary to achieve this behavior quickly decays as illustrated in <ref>. However, since it stays significantly above its theoretical lower bound T̲_s, it remains at magnitudes which can be realized in practice. Therefore, <ref> provides an effective method for generating data such that an arbitrarily small tracking error can be ensured when using a GP model for compensating unknown nonlinearities in systems of the form (<ref>).
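The mechanism behind the observed decay, namely that denser training data along the reference shrinks the posterior standard deviation and with it the error bound, can be reproduced in a few lines. The sketch below performs a simplified version of the grid sweep described in the data-density experiment; the kernel hyperparameters and noise level are assumed values rather than the optimized ones, and the printed density is only a crude proxy for ρ.

```python
import numpy as np

# Sketch of the data-density sweep: for grids of decreasing grid constant on
# [-4,4]^2, compute the largest GP posterior standard deviation along the
# reference x_ref(t) = [2 sin t, 2 cos t].  Hyperparameters are assumed.
ell, sf2, sigma_on2 = 1.0, 1.0, 0.01

def kern(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

t = np.linspace(0.0, 2.0 * np.pi, 200)
X_ref = np.stack([2.0 * np.sin(t), 2.0 * np.cos(t)], axis=1)

for n in [5, 9, 17, 33]:                       # grid points per dimension
    gx, gy = np.meshgrid(np.linspace(-4, 4, n), np.linspace(-4, 4, n))
    X = np.stack([gx.ravel(), gy.ravel()], axis=1)
    K = kern(X, X) + sigma_on2 * np.eye(len(X))
    k_star = kern(X_ref, X)                    # (200, n^2)
    var = sf2 - np.einsum('ij,ij->i', k_star, np.linalg.solve(K, k_star.T).T)
    density = len(X) / 64.0                    # samples per unit area, crude proxy for rho
    print(f"n={n:3d}  density={density:6.1f}  max std on reference={np.sqrt(var.max()):.4f}")
```

Running such a sweep shows the maximum posterior standard deviation along the reference decreasing monotonically with the grid density, which is the quantity driving the tracking error bounds discussed above.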
§ CONCLUSION

This paper presents a novel, episodic approach for learning GP models in order to ensure an arbitrarily high desired tracking accuracy when the GP is used to compensate unknown nonlinearities in linear systems. We first derive a novel Bayesian prediction error bound for GP regression and demonstrate the straightforward computability of all required parameters. In order to establish a straightforwardly interpretable connection between training data and prediction accuracy, we propose a kernel-dependent measure of data density and show that the prediction error bound vanishes with increasing data density. We exploit the Bayesian error bounds to derive a time-varying tracking error bound when using the GP model to compensate unknown nonlinearities, and show that the tracking accuracy grows with increasing data density. These theoretical results allow us to develop an episodic approach for learning a GP model, such that a desired tracking error bound can be guaranteed. The effectiveness of our theoretical results is demonstrated in several simulations.

Armin Lederer (S'20) received the B.Sc. and M.Sc. degrees in electrical engineering and information technology from the Technical University of Munich, Germany, in 2015 and 2018, respectively. Since June 2018, he has been a PhD student at the Chair of Information-oriented Control, Department of Electrical and Computer Engineering at the Technical University of Munich, Germany. His current research interests include the stability of data-driven control systems and machine learning in closed-loop systems.

Jonas Umlauft (S'14) received the B.Sc. and M.Sc. degrees in electrical engineering and information technology from the Technical University of Munich, Germany, in 2013 and 2015, respectively. His Master's thesis was completed at the Computational and Biological Learning Group at the University of Cambridge, UK. Since May 2015, he has been a PhD student at the Chair of Information-oriented Control, Department of Electrical and Computer Engineering at the Technical University of Munich, Germany. His current research interests include the stability of data-driven control systems and system identification based on Gaussian processes.

Sandra Hirche (M'03–SM'11–F'20) received the Dipl.-Ing. degree in aeronautical engineering from the Technical University of Berlin, Berlin, Germany, in 2002, and the Dr.-Ing.
degree in electrical engineering from the Technical University of Munich, Munich, Germany, in 2005. From 2005 to 2007, she was awarded a Post-doctoral scholarship from the Japanese Society for the Promotion of Science at the Fujita Laboratory, Tokyo Institute of Technology, Tokyo, Japan. From 2008 to 2012, she was an Associate Professor with the Technical University of Munich. Since 2013, she has served as Technical University of Munich Liesel Beckmann Distinguished Professor and has been with the Chair of Information-Oriented Control, Department of Electrical and Computer Engineering, Technical University of Munich. She has authored or coauthored more than 150 papers in international journals, books, and refereed conferences. Her main research interests include cooperative, distributed, and networked control with applications in human–machine interaction, multirobot systems, and general robotics. Dr. Hirche has served on the editorial boards of the IEEE Transactions on Control of Network Systems, the IEEE Transactions on Control Systems Technology, and the IEEE Transactions on Haptics. She has received multiple awards such as the Rohde & Schwarz Award for her Ph.D. thesis, the IFAC World Congress Best Poster Award in 2005, and – together with students – the 2018 Outstanding Student Paper Award of the IEEE Conference on Decision and Control as well as Best Paper Awards from IEEE Worldhaptics and the IFAC Conference of Manoeuvring and Control of Marine Craft in 2009.